Gentoo virtual/pam removal

October 14th, 2019

I had to remove virtual/pam today as part of the Gentoo/FreeBSD removal, since the package is now masked; see bug #697630. Unfortunately, this will make your next emerge fail: many of the things you’ve installed may depend on virtual/pam rather than sys-libs/pam, and emerge will want you to unmask virtual/pam to resolve it. Perhaps forcing emerge to backtrack far enough would lead it to the right conclusion; rather than attempt that, I decided to rebuild everything that depended on virtual/pam, just to keep these boxes clean:

qdepends -q -Q virtual/pam | cut -f1 -d: | sed -e 's/^/=/g' | xargs emerge -1

This was easier than finding out which things were blocking emerge one-at-a-time.
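To show what that pipeline actually does, here is a toy run on fabricated qdepends-style output (the real input comes from `qdepends -q -Q virtual/pam`; the package names and versions here are made up):

```shell
# Simulate two lines of qdepends output: "category/pkg-version: deps...".
# cut keeps everything before the first colon; sed prepends "=" so that
# emerge -1 (one-shot) rebuilds those exact versions.
sample='sys-apps/shadow-4.6-r2: virtual/pam
net-misc/openssh-8.0_p1: virtual/pam'
printf '%s\n' "$sample" | cut -f1 -d: | sed -e 's/^/=/'
# prints:
# =sys-apps/shadow-4.6-r2
# =net-misc/openssh-8.0_p1
```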

Odd gentoo merge problem for libvirt

July 3rd, 2018

Somehow, during the upgrade from libvirt 4.1 to 4.3, I had a problem building and installing the new libvirt, with errors like the following:

qemu_monitor_json.c:(.text+0x2de6): undefined reference to `virJSONValueGetType'
./.libs/libvirt_driver_qemu_impl.a(libvirt_driver_qemu_impl_la-qemu_monitor_json.o):qemu_monitor_json.c:(.text+0x58b6): more undefined references to `virJSONValueGetType' follow
./.libs/libvirt_driver_qemu_impl.a(libvirt_driver_qemu_impl_la-qemu_monitor_json.o): In function `qemuMonitorJSONGetMigrationParams':
qemu_monitor_json.c:(.text+0x666c): undefined reference to `virJSONValueObjectStealObject'
./.libs/libvirt_driver_qemu_impl.a(libvirt_driver_qemu_impl_la-qemu_monitor_json.o): In function `qemuMonitorJSONGetDumpGuestMemoryCapability':
qemu_monitor_json.c:(.text+0x7306): undefined reference to `virJSONValueGetType'
./.libs/libvirt_driver_qemu_impl.a(libvirt_driver_qemu_impl_la-qemu_monitor_json.o): In function `qemuMonitorJSONGetBlockIoThrottle':
qemu_monitor_json.c:(.text+0xa5ba): undefined reference to `virJSONValueGetType'
./.libs/libvirt_driver_qemu_impl.a(libvirt_driver_qemu_impl_la-qemu_monitor_json.o): In function `qemuMonitorJSONGetCPUDefinitions':
qemu_monitor_json.c:(.text+0xb76f): undefined reference to `virJSONValueGetType'
./.libs/libvirt_driver_qemu_impl.a(libvirt_driver_qemu_impl_la-qemu_monitor_json.o): In function `qemuMonitorJSONGetMigrationCapabilities':
qemu_monitor_json.c:(.text+0xdf4c): undefined reference to `virJSONValueGetType'
./.libs/libvirt_driver_qemu_impl.a(libvirt_driver_qemu_impl_la-qemu_monitor_json.o): In function `qemuMonitorJSONGetGICCapabilities':
qemu_monitor_json.c:(.text+0xe2cc): undefined reference to `virJSONValueGetType'
collect2: error: ld returned 1 exit status
libtool:   error: error: relink '' with the above command before installing it

Always similar linking and libtool errors, and many of them. It didn’t seem to matter what USE flags I set, or whether I rebuilt any of the dependencies. A bit frustrating.

The eventual fix was to remove libvirt and reinstall it. I’m still not sure what caused this, other than the ebuild somehow reusing stale object files from the previous build, which shouldn’t be possible. I figured it out by emerging libvirt on another box that didn’t already have it installed, where everything built and worked. So, on the broken box:

# emerge -C app-emulation/libvirt dev-python/libvirt-python
# emerge -av app-emulation/libvirt dev-python/libvirt-python

And then I was back in business.

ALSA volume only responds to Wave Surround

December 27th, 2017

This was a bit dumb. A couple of days after replacing a drive in my main Linux box, I could no longer control the Master and PCM volumes. Changing them did nothing. Only Wave Surround had any effect on the volume at all.

Stupidly, it turns out I’d connected my sound output to the surround output on the card. It drove me crazy for a bit until I remembered that I’d recabled everything a few days earlier.

So anyway, if Wave Surround is your only volume control under ALSA, check your cables.

Recreating Google Authenticator 2FA virtual tokens

November 14th, 2017

Once in a while I need to register a second-factor token on another device, or recreate it on a device where it already existed. I have an Android tablet that has crashed yet again (reset itself to factory defaults), and without fail I have to look up how to recreate the MFA tokens, which is just a waste of time.

So: I use Google Authenticator on Android for most, but not all, of my 2FA tokens. I wanted to create QR codes rather than type the secrets in again; the ones for AWS are 64 characters long, and typing those by hand is going to be error-prone. You can create a QR code with qrencode from the command line, so all I needed to know was the format of the TOTP URL that Google Authenticator expects.

It looks like this:

otpauth://totp/ISSUER:USERID?secret=CODE&issuer=ISSUER
USERID is an identifier specific to the user. CODE is the secret itself: the string of letters and digits that is combined with the current time to generate the displayed TOTP codes; mine are typically 16 characters or more. ISSUER is another identifier that gets displayed, usually naming who the token is for; AWS, for example.

So you’ll end up with a URL like this (with made-up values):

otpauth://totp/AWS:myuser?secret=ABCDEFGHIJKLMNOP2345&issuer=AWS
And then you display it as a QR code (in a suitable terminal… I’ve tried xterm, xfce4-terminal, and mate-terminal for this):

qrencode -tANSI256 'otpauth://totp/ISSUER:USERID?secret=CODE&issuer=ISSUER'

I generated an entire list of these with:

xargs -n 1 -ITOTP qrencode -tANSI256 TOTP < otp.urls

Scan one at a time, and you’re in business.
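As an aside, there’s no magic in CODE itself: it’s just the shared secret for standard TOTP (RFC 6238), which is HMAC-SHA1 over a 30-second time counter. A stdlib-only Python sketch of what the authenticator app computes from it (my sketch, not Google’s code):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, now=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a base32 secret."""
    key = base64.b32decode(secret_b32.replace(" ", "").upper())
    counter = int((time.time() if now is None else now) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32) at t=59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # 287082
```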

One thing that I will mention, because I think it’s important to be explicit about this: don’t just leave your OTP token secrets in an unencrypted file waiting to be pilfered. Common sense says these should be protected over and above your normal data, and preferably kept separate from KeePass or whatever you store your passwords in, if you store them at all. Be smart about it.

One more thing I discovered while writing this: not all randomly generated codes are equal. Some of the longer ones I generated for testing gave me “Key not recognized”. I’d never seen that before; perhaps I just have an error in the URL that I was unable to spot. I’ll update this if I figure it out.
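The first thing I’d check for that “Key not recognized” error is whether the secret is valid base32 at all: as far as I know, Google Authenticator expects the secret portion in base32 (A–Z and 2–7 only), so an arbitrary random string can be rejected. A quick sketch to screen candidates:

```python
import base64

def valid_b32_secret(secret):
    """True if the string decodes as base32 (spaces ignored, padding fixed up)."""
    s = secret.replace(" ", "").upper()
    s += "=" * (-len(s) % 8)  # base32 input length must be a multiple of 8
    try:
        base64.b32decode(s)
        return True
    except Exception:
        return False

print(valid_b32_secret("GEZDGNBVGY3TQOJQ"))  # True
print(valid_b32_secret("n0t-base32!"))       # False
```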

Building an OpenIndiana NFS cluster on 151a9 and Hipster

January 8th, 2017

I’ve been working with OpenIndiana at work. We wanted to build an NFS cluster that could take advantage of ZFS features, similar to clusters I’d built in the past with Veritas Cluster Server (on Linux and pre-5.10 Solaris) using VxVM and VxFS, and with the Linux clustering software. Our idea was that we could replace some much more expensive file-serving appliances for the cost of a few SAN-connected server boxes.

In order to build a cluster using ZFS, we needed something to manage resources, quorum, and fencing. At the time we started, there wasn’t yet a lot of confidence in ZFS on Linux, and we had no real experience with FreeBSD (I’ve played with it since in VirtualBox and I really like it, but it hasn’t become my daily driver). So I started looking at Solaris. I have a lot of time with Solaris (all the way back to SunOS 5.0; in fact, the earlier SunOS 4.1.x was what I worked on at my first job), and it works very well for me.

So, the question became: what clustering software is available for OpenSolaris variants like OmniOS and OpenIndiana? I remember early on there was a project to get some of the Solaris HA software (OHAC) working on OpenSolaris, but I guess that hasn’t gone anywhere in years; I think any progress on it disappeared with Oracle’s destruction of the OpenSolaris project. Even if it is available, I’ve never used it before.

I did, however, find that someone had already done the serious work of getting Pacemaker to build on newer SunOS releases like OpenIndiana. The site where I found this no longer exists, as far as I can tell. I had to make a few minor changes to get it to compile and run, but for the most part it was ready to go. Perhaps we were running slightly different builds, but I suspect they just weren’t using the same OCF resource agents that I was (most of the changes I needed to make were there). Beyond that, it was mostly a case of writing a resource agent that understands zpools and SCSI-3 persistent reservations, and then testing, testing, testing by hand in a lab environment.

It’s still not perfect, but it does work, and several areas could stand some improvement. The zpool resource agent that I wrote does allow zpools to migrate and fail over between hosts, but acquiring the SCSI reservations could be done in parallel across the devices; as of this writing we acquire them one at a time until we hold every device in the pool. Additionally, I need to add resource agents for COMSTAR iSCSI, and I’d like to make it work with CIFS; so far we’ve only used this effectively with NFS. Also, most resource agents other than the zpool, IPaddr, and MailTo OCF agents are untested in our environment. The few I tried only seemed to work once we explicitly changed them to use /bin/bash, but I’ve never gone back to find out which specific bashism makes /bin/sh (ksh93 on OpenIndiana) fail. I looked at it when we first started this in 2013, but the /bin/bash fix was easy and there’s no reason not to use bash here.
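For reference, the parallel acquisition I have in mind is plain shell job control. In this sketch, reserve_one is a stub standing in for the real SCSI-3 PR commands, and the device paths are made up; the real agent would walk the zpool’s vdevs instead of a hard-coded list:

```shell
# Sketch: acquire per-device reservations in parallel, then wait for all
# of them before importing the pool.  reserve_one is a stub.
reserve_one() {
    # e.g. sg_persist --out --register --param-sark="$KEY" "$1"
    echo "reserved $1"
}

rc=0
pids=""
for dev in /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t2d0s0; do
    reserve_one "$dev" &
    pids="$pids $!"
done
for pid in $pids; do
    wait "$pid" || rc=1   # any failed reservation fails the whole start
done
```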

So, first you will need an OpenIndiana box capable of compiling the necessary packages: Glue, Agents, Heartbeat, Pacemaker, and a few cluster tools. I installed all of these under /opt/ha because it was convenient and kept the software separate from the operating system. I also have a separate “Tools” repository that contains the zpool resource agent and some scripts I’ve found useful.

The first time I built this, back in 2013 on OI 151a8, I found that I needed to point the build at a proxy server to get xsltproc to work reliably while producing the documentation. That no longer appears to be necessary, but originally I used this (with our proxy’s URL substituted):

export http_proxy="http://<proxyhost:port>/"

That works around a problem with xsltproc; I’m just mentioning it here for reference. The documentation production changed considerably with the newer releases.

You can either build the packages on each host, or you can build IPS or SVR4 packages and install those on each box in the cluster. The original lab cluster was built and installed locally, naturally. I will show you what I did in order to end up with IPS packages for our various NFS clusters. It includes the build, install on the build host, and also install in temporary directories that can be used to create packages.

Also, before starting the build, I created a user and a group to own the software, both called “cluster”.

We start with the cluster glue package:

git clone <glue repo URI> glue
cd glue
./configure --prefix=/opt/ha --enable-fatal-warnings=no --enable-ansi=no --with-daemon-group=cluster --with-daemon-user=cluster
sudo make install
sudo rm -rf /tmp/glue
sudo mkdir /tmp/glue
sudo env DESTDIR=/tmp/glue make install

To build the agents package:

cd ..
git clone <agents repo URI> agents
cd agents
./configure --prefix=/opt/ha --enable-fatal-warnings=no --enable-ansi=no --with-ocf-root=/opt/ha/lib/ocf
sudo make install
sudo rm -rf /tmp/agents
sudo mkdir /tmp/agents
sudo env DESTDIR=/tmp/agents make install

Next, we build Heartbeat:

cd ..
git clone <heartbeat repo URI> heartbeat
cd heartbeat
./configure --prefix=/opt/ha --enable-fatal-warnings=no --enable-ansi=no
sudo make install
sudo rm -rf /tmp/heartbeat
sudo mkdir /tmp/heartbeat
sudo env DESTDIR=/tmp/heartbeat make install

Lastly, we build Pacemaker:

cd ..
git clone <pacemaker repo URI> pacemaker
cd pacemaker
./configure --prefix=/opt/ha --enable-fatal-warnings=no --enable-ansi=no --with-heartbeat
sudo make install
sudo rm -rf /tmp/pacemaker
sudo mkdir /tmp/pacemaker
sudo env DESTDIR=/tmp/pacemaker make install

Next we build packages. We already have a local package repository, which makes this easy. Here’s a script that I use to package up the already-built sources that we installed under /tmp using $DESTDIR:

rm -f /tmp/glue.p5m.*
rm -f /tmp/agents.p5m.*
rm -f /tmp/heartbeat.p5m.*
rm -f /tmp/pacemaker.p5m.*
rm -f /tmp/tools.p5m.*

pkgsend generate /tmp/glue > /tmp/glue.p5m.1
pkgsend generate /tmp/agents > /tmp/agents.p5m.1
pkgsend generate /tmp/heartbeat > /tmp/heartbeat.p5m.1
pkgsend generate /tmp/pacemaker > /tmp/pacemaker.p5m.1
pkgsend generate /tmp/tools > /tmp/tools.p5m.1

pkgmogrify -v glue.mogrify /tmp/glue.p5m.1 | pkgfmt > /tmp/glue.p5m.2
pkgmogrify -v agents.mogrify /tmp/agents.p5m.1 | pkgfmt > /tmp/agents.p5m.2
pkgmogrify -v heartbeat.mogrify /tmp/heartbeat.p5m.1 | pkgfmt > /tmp/heartbeat.p5m.2
pkgmogrify -v pacemaker.mogrify /tmp/pacemaker.p5m.1 | pkgfmt > /tmp/pacemaker.p5m.2
pkgmogrify -v tools.mogrify /tmp/tools.p5m.1 | pkgfmt > /tmp/tools.p5m.2

pkgdepend generate -md /tmp/glue /tmp/glue.p5m.2 | pkgfmt > /tmp/glue.p5m.3
pkgdepend generate -md /tmp/agents /tmp/agents.p5m.2 | pkgfmt > /tmp/agents.p5m.3
pkgdepend generate -md /tmp/heartbeat /tmp/heartbeat.p5m.2 | pkgfmt > /tmp/heartbeat.p5m.3
pkgdepend generate -md /tmp/pacemaker /tmp/pacemaker.p5m.2 | pkgfmt > /tmp/pacemaker.p5m.3
pkgdepend generate -md /tmp/tools /tmp/tools.p5m.2 | pkgfmt > /tmp/tools.p5m.3

pkgdepend resolve -m /tmp/glue.p5m.3
pkgdepend resolve -m /tmp/agents.p5m.3
pkgdepend resolve -m /tmp/heartbeat.p5m.3
pkgdepend resolve -m /tmp/pacemaker.p5m.3
pkgdepend resolve -m /tmp/tools.p5m.3

pkgsend publish -s <repo URI> -d /tmp/glue /tmp/glue.p5m.3.res
pkgsend publish -s <repo URI> -d /tmp/agents /tmp/agents.p5m.3.res
pkgsend publish -s <repo URI> -d /tmp/heartbeat /tmp/heartbeat.p5m.3.res
pkgsend publish -s <repo URI> -d /tmp/pacemaker /tmp/pacemaker.p5m.3.res
pkgsend publish -s <repo URI> -d /tmp/tools /tmp/tools.p5m.3.res
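Since all five packages go through an identical generate/mogrify/depend/publish pipeline, the script can also be emitted from a loop, which avoids copy-and-paste slips. A sketch that writes the equivalent script out (REPO is a placeholder for the repository URI):

```shell
# Emit the same per-package pipeline for each package into publish.sh.
# $p expands inside the heredoc; \$REPO stays literal for later editing.
for p in glue agents heartbeat pacemaker tools; do
    cat <<EOF
rm -f /tmp/$p.p5m.*
pkgsend generate /tmp/$p > /tmp/$p.p5m.1
pkgmogrify -v $p.mogrify /tmp/$p.p5m.1 | pkgfmt > /tmp/$p.p5m.2
pkgdepend generate -md /tmp/$p /tmp/$p.p5m.2 | pkgfmt > /tmp/$p.p5m.3
pkgdepend resolve -m /tmp/$p.p5m.3
pkgsend publish -s \$REPO -d /tmp/$p /tmp/$p.p5m.3.res
EOF
done > publish.sh
```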

At the end of this you will have 5 packages that you can install on your OpenIndiana 151a9 box. After that, you will need to set up the cluster.

I’ll have to put the details of that in a separate post.

OpenIndiana 10Gb Ethernet performance

June 2nd, 2016

Up to this point, I’ve really only used the Intel ixgbe 10 Gb cards with Solaris (or with Linux, for that matter). A couple of months ago I discovered the source of a long-standing problem we were having, and I was extremely disappointed in myself that it took so long to figure out the root cause: I had some of the clues, but I only put them together earlier this year.

Using a vnic or vlan device will kill your network performance at 10 Gb speeds.

It mostly comes down to the use of hardware rings on the card. If you have a vnic installed above your physical or link-aggregate interface and your card has a number of hardware rings, you will see something like this:

# dlstat show-link
           LINK  TYPE      ID  INDEX     PKTS    BYTES
          aggr1    rx   local     --        0        0
          aggr1    rx   bcast     --    1.74M  104.35M
          aggr1    rx      sw     --  118.75M   48.20G
          aggr1    tx   bcast     --    1.83M   76.95M
          aggr1    tx      hw      0   22.68M    5.74G
          aggr1    tx      hw      1   22.65M    5.77G
          aggr1    tx      hw      2  830.57K  225.81M
          aggr1    tx      hw      3  226.90K  171.57M
          aggr1    tx      hw      4    1.20M  197.86M
          aggr1    tx      hw      5   21.25M    6.41G
          aggr1    tx      hw      6  989.00K  598.06M
          aggr1    tx      hw      7  606.00K  368.05M
          aggr1    tx      hw      8    1.09M  849.04M
           nfs1    rx   local     --        0        0
           nfs1    rx   bcast     --  939.90K   56.39M
           nfs1    rx      sw     --  135.70G  378.82T
           nfs1    tx   bcast     --  604.81K   27.82M
           nfs1    tx      hw      0  375.13M  531.59G
           nfs1    tx      hw      1  379.57M  540.94G
           nfs1    tx      hw      2    1.25G  921.20G
           nfs1    tx      hw      3  229.59M  331.10G
           nfs1    tx      hw      4  125.08M  172.25G
           nfs1    tx      hw      5  266.42M  375.07G
           nfs1    tx      hw      6    7.21G    8.74T
           nfs1    tx      hw      7    8.24G    1.70T
           nfs1    tx      hw      8  786.11M    1.09T

You’ll notice that I removed some of the transmit rings from the output, because I configured so many of them in ixgbe.conf (probably way more than is required or recommended; these boxes are HP DL360 Gen8s).
Most importantly, you can see that there is essentially a single software receive (rx) ring for both aggr1 and nfs1. nfs1 existed to put NFS traffic on a separate VLAN; not my idea, just something they decided to do.

If you dig through the illumos or OpenIndiana code long enough, you can determine that the vnic is the root cause of that. Once it’s removed, the same command outputs something like this:

# dlstat show-link
           LINK  TYPE      ID  INDEX     PKTS    BYTES
          aggr1    rx   local     --        0        0
          aggr1    rx   bcast     --        0        0
          aggr1    rx      hw      0   13.70M   65.01G
          aggr1    rx      hw      1   10.84M   57.96G
          aggr1    rx      hw      2   12.64M   64.92G
          aggr1    rx      hw      3    8.79M   46.62G
          aggr1    rx      hw      4    8.78M   46.61G
          aggr1    rx      hw      5   11.24M   57.29G
          aggr1    rx      hw      6   10.84M   57.96G
          aggr1    rx      hw      7   84.71M  311.88G
          aggr1    rx      hw      8   21.28M   90.69G
          aggr1    rx      hw      9   10.57M   53.57G
          aggr1    rx      hw     10    8.80M   46.62G
          aggr1    rx      hw     11   12.67M   64.92G
          aggr1    rx      hw     12   14.06M   59.05G
          aggr1    rx      hw     13    8.79M   46.62G
          aggr1    rx      hw     14    1.97M    7.71G
          aggr1    rx      hw     15   22.24M  101.22G
          aggr1    tx   bcast     --        0        0
          aggr1    tx      hw      0    7.23M  763.33M
          aggr1    tx      hw      1    6.74M    7.56G
          aggr1    tx      hw      2    1.73M  463.86M
          aggr1    tx      hw      3  420.66K   80.03M
          aggr1    tx      hw      4  392.61K   74.03M
          aggr1    tx      hw      5    5.83M  543.86M
          aggr1    tx      hw      6    5.65M    7.46G
          aggr1    tx      hw      7    7.45M   14.39G
          aggr1    tx      hw      8    7.24M    7.62G
          aggr1    tx      hw      9    6.20M    7.46G
          aggr1    tx      hw     10    1.29M  380.93M
          aggr1    tx      hw     11    2.61M  220.95M
          aggr1    tx      hw     12   53.14M   17.73G
          aggr1    tx      hw     13  876.49K  108.12M
          aggr1    tx      hw     14    6.81M    1.50G
          aggr1    tx      hw     15    6.27M    1.43G

OK, so nfs1 has been removed, and now rx and tx lanes/hardware rings are visible. If the box is also configured to continue processing packets on the CPU that initially serviced the interrupt (ip:ip_squeue_fanout? ip_tcp_squeue_wput = 2?), you’ll see a radical increase in performance.
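For my own notes, those tunables would go in /etc/system; I haven’t re-verified the exact names against current illumos, so treat this fragment as a sketch rather than gospel:

```
* /etc/system fragment (sketch): fan inbound connection processing out
* across CPUs instead of pinning it to the interrupt CPU
set ip:ip_squeue_fanout=1
set ip:tcp_squeue_wput=2
```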

Here is the box before the change:

# iperf -s -i 1
Server listening on TCP port 5001
TCP window size: 2.00 MByte (default)
[  4] local x.x.x.x port 5001 connected with x.x.x.x port 53698
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 1.0 sec   353 MBytes  2.96 Gbits/sec
[  4]  1.0- 2.0 sec   499 MBytes  4.19 Gbits/sec
[  4]  2.0- 3.0 sec   500 MBytes  4.19 Gbits/sec
[  4]  3.0- 4.0 sec   504 MBytes  4.23 Gbits/sec
[  4]  4.0- 5.0 sec   500 MBytes  4.20 Gbits/sec
[  4]  5.0- 6.0 sec   500 MBytes  4.19 Gbits/sec
[  4]  6.0- 7.0 sec   487 MBytes  4.08 Gbits/sec
[  4]  7.0- 8.0 sec   492 MBytes  4.13 Gbits/sec
[  4]  8.0- 9.0 sec   498 MBytes  4.17 Gbits/sec
[  4]  9.0-10.0 sec   502 MBytes  4.21 Gbits/sec
[  4]  0.0-10.0 sec  4.72 GBytes  4.05 Gbits/sec

And here is the after:

[  5] local x.x.x.x port 5001 connected with x.x.x.x port 58292
[  5]  0.0- 1.0 sec   888 MBytes  7.45 Gbits/sec
[  5]  1.0- 2.0 sec  1.09 GBytes  9.40 Gbits/sec
[  5]  2.0- 3.0 sec  1.09 GBytes  9.40 Gbits/sec
[  5]  3.0- 4.0 sec  1.09 GBytes  9.40 Gbits/sec
[  5]  4.0- 5.0 sec  1.09 GBytes  9.40 Gbits/sec
[  5]  5.0- 6.0 sec  1.09 GBytes  9.39 Gbits/sec
[  5]  6.0- 7.0 sec  1.09 GBytes  9.40 Gbits/sec
[  5]  7.0- 8.0 sec  1.09 GBytes  9.40 Gbits/sec
[  5]  8.0- 9.0 sec  1.09 GBytes  9.40 Gbits/sec
[  5]  9.0-10.0 sec  1.07 GBytes  9.17 Gbits/sec
[  5]  0.0-10.0 sec  10.7 GBytes  9.14 Gbits/sec

It’s not always quite that fast for a single stream, which I suspect has more to do with which CPUs are handling the interrupts; I still have a bit of tuning to do on this test cluster. With the LACP aggregate (2x 10 Gb ixgbe 82599 built into the motherboard), I have seen 18.9 Gb/s with multiple iperf connections.

In case anyone is curious, I’m using OpenIndiana hipster on HP DL360 Gen8s: 24 cores, 128 GB of RAM, two onboard 82599 ixgbe Ethernet ports, and two Emulex boards with 8 Gb optics for SAN attachment to Brocade switches. The storage is driven by an EMC VPLEX (VS2), but we do use some direct attachment to flash arrays for ZILs (to get the absolute minimum latency). The clustering I’ll detail in another post.

OpenIndiana hipster 2014.1

July 10th, 2014

I haven’t been paying close attention the last few weeks and was surprised to discover that hipster stopped updating. So, I finally had time to go read the mailing list and discovered that they’d moved to a hipster-2014.1 repository that contains just the most recent packages.

I needed to set up my boxes at work to mirror this repository, so I created a new local mirror:

# mkdir /data/pkg/hipster-2014.1
# pkgrepo create /data/pkg/hipster-2014.1
# pkgrecv -s <origin URI> -d /data/pkg/hipster-2014.1 '*'
# svccfg -s pkg/server add hipster-2014-1
# svccfg -s pkg/server:hipster-2014-1 addpg pkg application
# svccfg -s pkg/server:hipster-2014-1 setprop pkg/port=10087
# svccfg -s pkg/server:hipster-2014-1 setprop pkg/inst_root=/data/pkg/hipster-2014.1
# svccfg -s pkg/server:hipster-2014-1 addpg general framework
# svccfg -s pkg/server:hipster-2014-1 addpropvalue general/complete astring: hipster-2014.1
# svccfg -s pkg/server:hipster-2014-1 addpropvalue general/enabled boolean: true
# svccfg -s pkg/server:hipster-2014-1 setprop pkg/readonly=true
# svccfg -s pkg/server:hipster-2014-1 setprop pkg/threads=100
# svcadm refresh application/pkg/server:hipster-2014-1
# svcadm enable application/pkg/server:hipster-2014-1

Then, I have to fix apache on the box by editing proxy.conf:

# vi /etc/apache2/2.2/conf.d/proxy.conf

I add the following line:

ProxyPass /hipster-2014.1 http://localhost:10087/ nocanon

Making the file look like:

ProxyRequests Off
ProxyVia Block
ProxyStatus On
ProxyPreserveHost Off
ProxyPass /dev http://localhost:<port>/ nocanon
ProxyPass /sfe http://localhost:<port>/ nocanon
ProxyPass /sfe-encumbered http://localhost:<port>/ nocanon
ProxyPass /local http://localhost:<port>/ nocanon
ProxyPass /legacy http://localhost:<port>/ nocanon
ProxyPass /hipster http://localhost:<port>/ nocanon
ProxyPass /hipster-2014.1 http://localhost:10087/ nocanon
AllowEncodedSlashes NoDecode

Restart apache, set the publisher, and update:

# svcadm restart svc:/network/http:apache22
# pkg set-publisher -p <repo URI>
pkg set-publisher:
  Updated publisher(s):
# pkg set-publisher -p <repo URI>
pkg set-publisher:
  Updated publisher(s):
# pkg image-update
           Packages to install:   3
            Packages to update: 487
           Mediators to change:   2
       Create boot environment: Yes
Create backup boot environment:  No

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            490/490     7415/7415  674.6/674.6 13.5M/s

PHASE                                          ITEMS
Removing old actions                       2682/2682
Installing new actions                     3920/3920
Updating modified actions                10263/10263
Updating package state database                 Done
Updating package cache                       487/487
Updating image state                            Done
Creating fast lookup database                   Done
Reading search index                            Done
Building new search index                  1236/1236

A clone of openindiana-41 exists and has been updated and activated.
On the next boot the Boot Environment openindiana-42 will be
mounted on '/'.  Reboot when ready to switch to this updated BE.

NOTE: Please review release notes posted at:

And reboot, and then you have a new hipster box.

Building GNU Screen 4.2.1 on OpenIndiana

April 30th, 2014

Surprisingly, a new version of GNU screen (4.2.1) was released recently, so I thought I’d build it on my OpenIndiana (hipster) box to see if it worked, and ran into a bit of a problem:

gcc -c -I. -I.  -DETCSCREENRC='"/usr/local/etc/screenrc"' -DSCREENENCODINGS='"/usr/local/share/screen/utf8encodings"' -DHAVE_CONFIG_H -DGIT_REV=""`git describe --always 2>/dev/null`"" \
     -g -O2 socket.c
socket.c: In function 'ReceiveMsg':
socket.c:990:16: warning: assignment from incompatible pointer type [enabled by default]
socket.c:994:6: error: 'struct msghdr' has no member named 'msg_controllen'
socket.c:995:6: error: 'struct msghdr' has no member named 'msg_control'
socket.c:1007:14: error: 'struct msghdr' has no member named 'msg_controllen'
socket.c:1010:14: warning: assignment makes pointer from integer without a cast [enabled by default]
socket.c:1010:48: warning: assignment makes pointer from integer without a cast [enabled by default]
socket.c: In function 'SendAttachMsg':
socket.c:1801:6: error: 'struct msghdr' has no member named 'msg_control'
socket.c:1802:6: error: 'struct msghdr' has no member named 'msg_controllen'
socket.c:1803:8: warning: assignment makes pointer from integer without a cast [enabled by default]
socket.c:1807:3: warning: passing argument 2 of 'bcopy' makes pointer from integer without a cast [enabled by default]
In file included from os.h:83:0,
                 from screen.h:30,
                 from socket.c:42:
/usr/include/strings.h:46:13: note: expected 'void *' but argument is of type 'int'
socket.c:1808:6: error: 'struct msghdr' has no member named 'msg_controllen'
gmake: *** [socket.o] Error 1

Well, that didn’t work as hoped, but after looking at the source and doing a quick search, I realized I could get it to build (and work, based on my limited testing) using:

CFLAGS="-D_XOPEN_SOURCE -D_XOPEN_SOURCE_EXTENDED=1 -D__EXTENSIONS__" ./configure --prefix=/usr/local

And then building with gmake, it works just fine. Looks like the project is alive again.

HBO Go on Linux

September 30th, 2013

HBO Go stopped working for me on Linux recently (actually it might not have been all that recent; I don’t use it that often). It was apparently a DRM problem with Flash 11.2.x on Gentoo (and, I’m assuming, other Linux distributions). I was able to get it working again by emerging media-libs/hal-flash (version 0.2.0_rc1, I assume). I have no idea why HAL is “deprecated”.

The move away from Flash is eventually going to be OK, since Flash came with its own issues, particularly on 64-bit Linux. In the short term, though, it has caused me a lot of problems.

Setting up repository mirrors for local use

August 29th, 2013

I have a few OpenIndiana servers now, at home and at work, and I’ve been working hard on clustering lately. To help with that, I’m busy setting up a local repository in both places: something I can update periodically and then use to update all my servers and containers. I’ve had a lot of success with a local http-replicator cache for Gentoo, and I hope this will prove just as useful. I’m not all that familiar with IPS anyway, so anything I learn here will be useful. I sometimes miss the simplicity of SVR4 packages, but they never gained the features that other distributions were busy pioneering.

Based on some documentation I’ve been reading, to create the repository on my local machine, in a data zpool that I’d already created:

mkdir /data/pkg/dev
pkgrepo create /data/pkg/dev
pkgrecv -s <origin URI> -d /data/pkg/dev '*'
pkgrepo rebuild -s /data/pkg/dev

To set up the dev pkg.depotd server:

svccfg -s pkg/server add dev
svccfg -s pkg/server:dev addpg pkg application
svccfg -s pkg/server:dev setprop pkg/port=10081
svccfg -s pkg/server:dev setprop pkg/inst_root=/data/pkg/dev
svccfg -s pkg/server:dev addpg general framework
svccfg -s pkg/server:dev addpropvalue general/complete astring: dev
svccfg -s pkg/server:dev addpropvalue general/enabled boolean: true
svccfg -s pkg/server:dev setprop pkg/readonly=true
svccfg -s pkg/server:dev setprop pkg/threads=100

Some of these commands are borrowed from Solaris 11 instructions. I still need to look up some of these properties and find out whether they are documented; before today I had only seen the pkg/port, pkg/inst_root, and pkg/readonly properties.

To enable it:

svcadm refresh application/pkg/server:dev
svcadm enable application/pkg/server:dev

After all the depot servers are up and running, I added this to the /etc/apache2/2.2/conf/proxy.conf file:

ProxyRequests Off
ProxyVia Block
ProxyStatus On
ProxyPreserveHost Off
ProxyPass /dev http://localhost:10081/ nocanon
ProxyPass /sfe http://localhost:<port>/ nocanon
ProxyPass /sfe-encumbered http://localhost:<port>/ nocanon
ProxyPass /local http://localhost:<port>/ nocanon
ProxyPass /legacy http://localhost:<port>/ nocanon
AllowEncodedSlashes NoDecode

After that is done, you need to set up the clients to use it. To replace the publisher:

pkg set-publisher -G '*' -g <repo URI>
pkg refresh --full

After that, I was able to do ‘pkg update’ as expected. I was able to mirror dev, sfe, sfe-encumbered and legacy without much trouble.

Now that I have that, I’ve created a local repository for development. This will be fun. I’m wondering how to feed some packages back to the community; I should probably ask.