An attempt at documenting my ongoing struggles with solaris and opensolaris on x86. I believe strongly in the (public) documentation of trials, struggles and failures, even more so than in the documentation of success: with a long-standing commitment to solaris, looking for answers and just finding "I tried it on distribution 'X' and it worked for me" is not very informative.

Sunday, March 29, 2009

building enblend: another step closer to hugin?

Don't know why I just don't give up. I *need* to get my photo stuff into hugin I suppose.

So, having a go at building enblend; yet another dependency for hugin.

Downloaded and

# ./configure
[..]
configure: error: libxmi is required to compile enblend.

so. Download libxmi and see what that brings us:
#./configure
# make
# make install

And it works! Kudos to the developers of both libxmi and opensolaris... clean, portable code and a build environment that's starting to make sense of linux stuff!

Another go at building enblend (with LDFLAGS=-L /usr/local/lib) now fails because the boost headers are missing. Which is not too bad to run into now, as hugin lists boost as a requirement as well.
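For reference, the kind of invocation I mean (the CPPFLAGS bit is my guess at what will be needed once boost lands in /usr/local as well, not something I've verified yet):

# LDFLAGS="-L/usr/local/lib" CPPFLAGS="-I/usr/local/include" ./configure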

Let's have a go at those.

# ./configure
passed!

# make

[ ... quite a few errors relating to python stuff, but I'll leave it at that for now; just hoping python isn't used down the road ...]
#make install
works as well!

update 30/03: Managed to get ./configure to complete. However, make struggles. I filed an incident report with the good sourceforge-folks of the project team:

See http://solaristerror.blogspot.com/2009/01/pidgin-facebook-plugins-and-opensolaris.html for details.

adhoc wireless, dhcp-serving and nat'ing with opensolaris

Lately, I've been in spots with multiple laptops and only wired-internet.

Time to fix that I'd say and take control: create a wireless ad-hoc network, with my laptop as the initiator and have the other laptops connect through me.

creating a wireless ad-hoc network is remarkably simple:

in preparation, I first disabled nwam (as my usual mode of operation)

#svcadm disable nwam
#svcadm enable physical:default

Create an ad-hoc wireless network with SSID "twitternet" (check first that the SSID doesn't exist yet):

# dladm connect-wifi -e twitternet -c -b ibss iwk0
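As for the "check first" bit: something like the following should list the SSIDs currently in the air (I'm assuming iwk0 here; adjust for your wireless interface):

# dladm scan-wifi iwk0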

Next plumb the TCP/IP part of it.

Not needed yet, but useful for DHCP: add an entry for 192.168.0.0 to /etc/inet/netmasks.
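The entry itself is the usual network/netmask pair; assuming a /24 for the ad-hoc net it would look like this:

192.168.0.0     255.255.255.0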

# ifconfig iwk0 plumb
# ifconfig iwk0 192.168.0.1 netmask + broadcast + up

So, now an adhoc wireless network has been created.

In order to setup a DHCP server and provide some automated network services for generic clients:

Ok, I "cheated", and used the GUI to do the initial configuration of the macros etcetera: /usr/sadm/admin/bin/dhcpmgr. You should be able to achieve the same with:

# dhcpconfig -D -r SUNWfiles -p /var/dhcp -h files

dhcpconfig/dhcpmgr will set up an ASCII-file database in /var/dhcp and update /etc/hosts with the names. It'll also create some macros with as much stuff as it can find out about your network. (Check the man page for dhcpconfig; it has a nice table.)

With the dhcpmgr GUI, create a network (192.168.0.0 in my case) and set it up to your liking (no-nonsense config; I just assigned a static router, 192.168.0.1, and in the end the network macro. I'll look into using RIP another time).

This can of course also be done with pntadm.
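For the record, a rough sketch of the pntadm equivalent (the addresses are just examples, and the syntax is from memory, so check the pntadm man page before trusting it): -C creates the network table, -A adds a client address tied to the 192.168.0.0 macro.

# pntadm -C 192.168.0.0
# pntadm -A 192.168.0.2 -m 192.168.0.0 192.168.0.0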

As the network macro did not include the external DNS server, and I have not set up my laptop to run one, I checked the localhost macro and noticed that it nicely implements the DnsDomain and DnsServer options, pre-populated with the entries from /etc/resolv.conf (that's why it was useful to put an entry into /etc/inet/netmasks). Nice!

As the 192.168.0.0 network however had the 192.168.0.0 macro assigned to it, the clients would not pick up the DNS* options. I opened the properties for the 192.168.0.0 macro and added a field "Include" with value "localhost"; now the localhost macro is also provided to the clients!
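The same edit should also be doable from the command line with dhtadm, something along these lines (my assumption of the syntax, so verify against the man page):

# dhtadm -M -m 192.168.0.0 -e Include=localhost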

All done with dhcp-server configuration (a 2 minute job really)

In order to prevent the dhcp-server from responding to DHCP requests on all interfaces, I added a line to /etc/inet/dhcpsvc.conf:

INTERFACES=iwk0

As a final check:

1) is the wireless adhoc bit setup?
# dladm show-wifi
LINK STATUS ESSID SEC STRENGTH MODE SPEED
iwk0 connected twitternet none good b 54Mb
#

2) is iwk0 up and does the ip address match the router ip address you're handing out?

# ifconfig iwk0
iwk0: flags=201100843 mtu 1500 index 2
inet 192.168.0.1 netmask ffffff00 broadcast 192.168.0.255
ether 0:1d:e0:19:e9:25
#

# dhtadm -P
Name Type Value
==================================================
192.168.0.0 Macro :Subnet=255.255.255.0:Router=192.168.0.1:Broadcst=192.168.0.255:Include=localhost:
localhost Macro :Include=Locale:Timeserv=127.0.0.1:LeaseTim=86400:LeaseNeg:DNSdmain="local":DNSserv=XX.YY.40.25 XX.YY.35.25:
Locale Macro :UTCoffst=3600:
#

Next, connect the network cable to my wired-network port and

# ifconfig e1000g0 dhcp start
# ifconfig e1000g0
e1000g0: flags=201104843 mtu 1500 index 3
inet XX.YY.27.76 netmask fffff800 broadcast XX.YY.31.255
ether 0:a0:d1:a2:f1:e3


Almost done!

Run routeadm and make sure ipv4 routing and forwarding are enabled. (Usually, forwarding is disabled; enable with "routeadm -e ipv4-forwarding ; routeadm -u" where needed.)
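Spelled out, including a verification step, that's roughly:

# routeadm -e ipv4-routing -e ipv4-forwarding
# routeadm -u
# routeadm

The last routeadm (without arguments) just prints the current state, so you can check that both are enabled.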

One last hurdle: you'll need NAT on your wired port to have the clients utilise the single ip address you've been given:

Enable ipfilter (in a blank configuration like mine, ipfilter requires the network/physical:default service, which is why that needed to be enabled earlier).

# svcadm enable ipfilter

Check that you don't have any NAT mappings yet:
# ipnat -l
and if needed: clean them out with:
# ipnat -C ; ipnat -F

Next, setup a nat mapping:
# echo "map e1000g0 192.168.0.0/24 -> 0.0.0.0/32" | ipnat -f -

Enable the dhcp server:
# svcadm enable dhcp-server

do a final check:
# svcs -x

If everything's running (i.e. ipfilter, dhcp-server mainly) you should be ready to serve the rest of the room.

Of course, you'll need additional (reverse) port mappings if your clients would like to perform server tasks, but that's beyond this scope (besides, my clients so far were pretty bog-standard windows users and laptops, so no fancy stuff needed).


Wednesday, February 25, 2009

building mono 2.2 from scratch on opensolaris

Ah, the sweet taste of success makes me long for more...

I had another look at hugin and remembered I'd need mono to get hugin (or one of its dependencies) to work on my opensolaris laptop.

So once more I ventured into the trenches and downloaded the mono sources.

I left my symlinks to aclocal and automake from a previous exercise in place, promoted /usr/sfw/bin to be first in the $PATH, symlinked /usr/sfw/bin/ggrep to /usr/sfw/bin/grep to fix an initial grep error in a first pass of ./configure and:

# ./configure
passed!
I did see some grep errors, and assuming that ggrep would fix those, I symlinked ggrep to grep in /usr/sfw/bin.
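In other words, something like this (assuming /usr/sfw/bin sits before /usr/bin in $PATH, as set up earlier):

# ln -s /usr/sfw/bin/ggrep /usr/sfw/bin/grep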

# make
crashed and burned with errors complaining about dtrace:

make[3]: Entering directory `/root/mono-2.2/mono/utils'
/usr/sbin/dtrace -32 -s ../../data/mono.d -o mono-dtrace.h
dtrace: script '../../data/mono.d' matched 0 probes
dtrace: no probes matched
make[3]: *** [mono-dtrace.h] Error 1
make[3]: Leaving directory `/root/mono-2.2/mono/utils'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/root/mono-2.2/mono'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/mono-2.2'
make: *** [all] Error 2

#./configure --enable-dtrace=true
fixed that!

(reading the ./configure --help I realise I should've used --enable-dtrace=yes, so I'm assuming dtrace support isn't actually enabled...)

I symlinked /usr/sfw/bin/gmake to /usr/sfw/bin/make and

# make
crashed on some u_int32_t errors:

Googling for this I found a solution: replace u_int32_t with uint32_t and try again. Seems most of them were ironed out before, except for three entries in the file shown below:

In file included from ../../mono/utils/freebsd-elf32.h:32,
from aot-compiler.c:67:
../../mono/utils/freebsd-elf_common.h:46: error: syntax error before "u_int32_t"
../../mono/utils/freebsd-elf_common.h:46: warning: no semicolon at end of struct or union
../../mono/utils/freebsd-elf_common.h:47: warning: type defaults to `int' in declaration of `n_descsz'
../../mono/utils/freebsd-elf_common.h:47: warning: data definition has no type or storage class
../../mono/utils/freebsd-elf_common.h:48: error: syntax error before "n_type"
../../mono/utils/freebsd-elf_common.h:48: warning: type defaults to `int' in declaration of `n_type'
../../mono/utils/freebsd-elf_common.h:48: warning: data definition has no type or storage class
../../mono/utils/freebsd-elf_common.h:49: warning: type defaults to `int' in declaration of `Elf_Note'
../../mono/utils/freebsd-elf_common.h:49: warning: data definition has no type or storage class
In file included from aot-compiler.c:67:
../../mono/utils/freebsd-elf32.h:144: error: syntax error before "Elf32_Nhdr"
../../mono/utils/freebsd-elf32.h:144: warning: type defaults to `int' in declaration of `Elf32_Nhdr'
../../mono/utils/freebsd-elf32.h:144: warning: data definition has no type or storage class
In file included from aot-compiler.c:68:
../../mono/utils/freebsd-elf64.h:162: error: syntax error before "Elf64_Nhdr"
../../mono/utils/freebsd-elf64.h:162: warning: type defaults to `int' in declaration of `Elf64_Nhdr'
../../mono/utils/freebsd-elf64.h:162: warning: data definition has no type or storage class
make[4]: *** [aot-compiler.lo] Error 1
make[4]: Leaving directory `/root/mono-2.2/mono/mini'
make[3]: *** [all] Error 2
make[3]: Leaving directory `/root/mono-2.2/mono/mini'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/root/mono-2.2/mono'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/mono-2.2'
make: *** [all] Error 2

Fixed that by changing the u_int32_t's to uint32_t's, and
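For the record, a one-liner of the sort below (run from the top of the mono-2.2 tree) should do it; perl -pi because the stock solaris sed has no in-place editing. The file is the one named in the error output above:

# perl -pi -e 's/u_int32_t/uint32_t/g' mono/utils/freebsd-elf_common.h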

# make
again.

now:
if test -w ../mcs; then :; else chmod -R +w ../mcs; fi
cd ../mcs && make NO_DIR_CHECK=1 PROFILES='net_1_1 net_2_0 net_3_5 net_2_1' CC='gcc' all-profiles
make[3]: Entering directory `/root/mono-2.2/mcs'
make profile-do--net_1_1--all profile-do--net_2_0--all profile-do--net_3_5--all profile-do--net_2_1--all
make[4]: Entering directory `/root/mono-2.2/mcs'
make PROFILE=basic all
make[5]: Entering directory `/root/mono-2.2/mcs'
usage: mcs [-cdpVz] [-a string] [-n name] file ...
make[6]: *** [build/deps/basic-profile-check.exe] Error 1
make[6]: Entering directory `/root/mono-2.2/mcs'
*** The compiler 'mcs' doesn't appear to be usable.
*** Trying the 'monolite' directory.
make[7]: Entering directory `/root/mono-2.2/mcs'
build/deps/basic-profile-check.cs(1,1): error CS8025: Parsing error
make[8]: *** [build/deps/basic-profile-check.exe] Error 1
make[8]: Entering directory `/root/mono-2.2/mcs'
*** The contents of your 'monolite' directory may be out-of-date

Ouch. What's that?! Checking the readme... it suggests that I'll have to run "make get-monolite-latest" (make sure gtar is found as tar in your $PATH and move solaris' mcs out of the way).
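From memory, that boiled down to something like this (the /usr/ccs/bin location of the solaris mcs is an assumption on my part, so double-check where yours lives before moving it):

# mv /usr/ccs/bin/mcs /usr/ccs/bin/mcs.solaris
# make get-monolite-latest

followed by: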
# make EXTERNAL_MCS=false

This works! (Ok, as I've been poking around a lot with getting this to work, I also installed the monolite-latest libs in /usr/local/lib/mono/1.0, so if the above doesn't work... install them there.)

I've also found that when make or ./configure fails, errors are seldom reproducible, even after make clean. I tend to rm -rf the dir and tar xvf again. This holds true for most of the stuff I build anyway.

anyways.

# make check runs fine as well:

339 test(s) passed. 3 test(s) did not pass.

Failed tests:

appdomain-unload.exe
thunks.exe
bug-459094.exe

Well. That does seem ok for now.

Friday, February 20, 2009

jUploadr for opensolaris

Sometimes (most of the time) using the standard web upload forms of flickr just won't do. Having worked with jUploadr before, I decided to give it another go on my pretty recent install of nevada build 105.

There were a couple of odd tricks I had to perform to get it to work though, as documented in Michal Pryc's blog. Still, it strikes me as odd to have to replace a couple of libraries to get what essentially seems to be a java application to work, but I guess that's the nature of eclipse.

Be that as it may, jUploadr works like a charm!

dvd ripping for opensolaris with jRipper (I wish)

I more and more need the capability of ripping dvds (specifically the audio tracks). It's absolutely wonderful listening to some of those excellent live performances recorded on dvd.

In case you're wondering, my favourites have been cosi fan tutte and, since today, diana krall's "live in paris" dvd that dropped on my doormat. Unfortunately, I've got no video in my car and besides, it'd distract me from driving anyway...

Remembering my positive experience with jUploadr as documented in a previous post, I found a tool which looks promising: jRipper. That page nicely lists a (hopefully) complete set of dependencies.
Seems I've got my work cut out for me. The times I haven't been lucky have been plenty...

Starting with cdda2wav...
# grep cdda2wav /var/sadm/install/contents
/usr/bin/cdda2wav f none 0555 root bin 997 18657 1228815120 SUNWmkcd
/usr/bin/cdda2wav.bin f none 0555 root bin 282732 54898 1228963666 SUNWmkcd
/usr/share/man/man1/cdda2wav.1 f none 0444 root bin 35235 53199 1228815116 SUNWsfman

Now that looks promising! Just as a sanity check, let's see whether all the tools mentioned by the cdrecord site are included: cdrecord, readcd, cdda2wav, mkisofs, isodebug, isodump, isoinfo, isovfy and rscsi.

# gawk '/SUNWmkcd/{print $1}' /var/sadm/install/contents
/etc
/etc/security
/etc/security/exec_attr
/usr
/usr/bin
/usr/bin/cdda2wav
/usr/bin/cdda2wav.bin
/usr/bin/cdrecord
/usr/bin/cdrecord.bin
/usr/bin/mkisofs
/usr/bin/readcd
/usr/bin/readcd.bin

Hmm. Well at least some of the tools are included. Where are the iso* tools though? Grep'ing /var/sadm/install/contents suggests they are not present. I'll leave it at this for now, as jRipper suggests it's only using cdda2wav.

Next: lame.
Digging through the contents-file again, I couldn't establish any presence of lame. So, the first of the nastier steps (well, they may be)...

Download, and as per the suggestions in the INSTALL file:

# export CC=/usr/sfw/bin/gcc
# ./configure

# make

# make install

Seems to do the trick! (I'll need to look into optimising with a few CFLAGS later...)

To test that lame is actually functional, a few quick sanity checks:
# file `which lame`
/usr/local/bin/lame: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, no debugging information available

So it's dynamically linked. Let's check whether it can find its libraries:

# ldd `which lame`
libcurses.so.1 => /lib/libcurses.so.1
libm.so.2 => /lib/libm.so.2
libsocket.so.1 => /lib/libsocket.so.1
libnsl.so.1 => /lib/libnsl.so.1
libc.so.1 => /lib/libc.so.1
libmp.so.2 => /lib/libmp.so.2
libmd.so.1 => /lib/libmd.so.1
libscf.so.1 => /lib/libscf.so.1
libuutil.so.1 => /lib/libuutil.so.1
libgen.so.1 => /lib/libgen.so.1
Seems all right. Should I worry that none of the libs installed by lame in /usr/local/lib are used?

Let's go for a quick field test...

By default, there are a few .wav files to be found in /opt/staroffice8/share/gallery/sounds/:

# cd /opt/staroffice8/share/gallery/sounds/
# ls

apert.wav space3.wav theetone.wav untie.wav

[... output omitted ...]

# lame apert.wav apert.mp3
LAME 3.98.2 32bits (http://www.mp3dev.org/)
Using polyphase lowpass filter, transition band: 8269 Hz - 8535 Hz
Encoding apert.wav to apert.mp3
Encoding as 22.05 kHz single-ch MPEG-2 Layer III (11x) 32 kbps qval=3
Frame        | CPU time/estim | REAL time/estim | play/CPU | ETA
45/45 (100%) |   0:00/   0:00 |   0:00/   0:00  | 29.388x  | 0:00
-----------------------------------------------------------
kbps   mono %   long switch short %
32.0   100.0    93.3   4.4    2.2
ReplayGain: -11.5dB
#
Works! Even playing the mp3 produces something sensible.


Next on the list: the ogg-vorbis stuff. Last time I checked, rhythmbox (installed by default) extracted a few cd's to .oga, so I'd say the ogg-vorbis support is included as well...

Digging through (again) /var/sadm/install/contents:
# grep SUNWogg-vorbis /var/sadm/install/contents | ...
[ ... output omitted ... ]

/usr/lib/libogg.so.0.5.3 f none 0755 root bin 21004 63931 1228487831 SUNWogg-vorbis
/usr/lib/libvorbis.so.0.4.0 f none 0755 root bin 213708 28878 1228487831 SUNWogg-vorbis
/usr/lib/libvorbisenc.so.2.0.3 f none 0755 root bin 1120516 56353 1228487831 SUNWogg-vorbis
/usr/lib/libvorbisfile.so.3.2.0 f none 0755 root bin 33668 22505 1228487832 SUNWogg-vorbis
[ ... output omitted ... ]
#

So that looks solid as well.

Onto flac then:
# grep SUNWflac$ /var/sadm/install/contents | grep " f "
[ ... output omitted ... ]

/usr/lib/amd64/libFLAC.so.8.2.0 f none 0755 root bin 366336 38583 1228468418 SUNWflac
/usr/lib/libFLAC.so.8.2.0 f none 0755 root bin 261492 22875 1228468419 SUNWflac
[ ... output omitted ... ]
So flac's installed as well! (Geez. going pretty well I'd say!)

Last of the dep's (I hope): faac/faad.

# grep faac /var/sadm/install/contents
# grep faad /var/sadm/install/contents

# find /usr/local -name \*faa\*

#


No such luck.
So. download faad and faac.
# unzip faad2-2.7.zip 2>&1 >/dev/null
# cd faad2-2.7

# cat README.linux

To compile under Linux.

----------------------

just run :
./configure --with-mp4v2
make

sudo make install

about the xmms plugin.

---------------------
The xmms plugin need to be build after the install of the faad project.
so after you have installed correctly faad (--with-xmms options) you need to configure and build the xmms plugin part in the plugins/xmms directory. Read the README and INSTALL files into the xmms directory.
As per usual, it looks pretty easy. Let's do it then!

# ./configure --with-mp4v2
ksh: ./configure: not found

# ls configure

configure: No such file or directory
No Configure!!?! So that's why they mentioned separate "bootstrapped" downloads. Let's see if I can get it to work anyways...
# ./bootstrap
ksh: ./bootstrap: cannot execute

# ls -l bootstrap
-rw-r--r-- 1 root root 333 Jul 27 2004 bootstrap
# chmod u+x bootstrap
# ./bootstrap
ksh: ./bootstrap: not found


Stubborn little program! Vi'ing, I see that the first line has a space where I feel it shouldn't be... Let's correct that!

# ./bootstrap
ksh: ./bootstrap: not found


Well. Let's fire it up via the shell then:
# sh bootstrap
bootstrap: ^M: not found

bootstrap: syntax error at line 4: `in^M' unexpected


Hmm. Would this be a dos file maybe?
# cat -v bootstrap
#! /bin/sh^M

^M
case $OSTYPE in^M

darwin*)^M
LIBTOOLIZE=glibtoolize^M
;;^M
*)^M
LIBTOOLIZE=libtoolize^M
;;^M
esac^M

^M
aclocal -I . && \^M
autoheader && \^M

$LIBTOOLIZE --automake --copy && \^M
automake --add-missing --copy && \^M
autoconf && \^M

echo "Ready to run ./configure"^M
# cat bootstrap | dos2unix > bootstrap2 ; mv bootstrap2 bootstrap ; chmod u+x bootstrap
#


try again
# ./bootstrap
./bootstrap: aclocal: not found

#

This really warrants an $(echo 'f*ck' | sed 's/\*/s/') I'd say. Though I'd hate to give up! aclocal indeed seems not to be installed on opensolaris. Google reveals a trick that seems reliable-ish though:

# ls /usr/bin/aclocal*
/usr/bin/aclocal-1.10 /usr/bin/aclocal-1.9


Let's set the 1.10 version to work:

# ln -s /usr/bin/aclocal-1.10 /usr/bin/aclocal
# which aclocal

/usr/bin/aclocal

And again...
# ./bootstrap
/usr/share/aclocal/libmikmod.m4:11: warning: underquoted definition of AM_PATH_LIBMIKMOD

/usr/share/aclocal/libmikmod.m4:11: run info '(automake)Extending aclocal'

/usr/share/aclocal/libmikmod.m4:11: or see http://sources.redhat.com/automake/automake.html#Extending-aclocal

./bootstrap: automake: not found
Pfff. Reading the blog entry again, it says: "and similarly for 'automake'".
So fix that as well:

# ls /usr/bin/automake*
/usr/bin/automake-1.10 /usr/bin/automake-1.9
# ln -s /usr/bin/automake-1.10 /usr/bin/automake
# which automake
/usr/bin/automake
If that doesn't fix things...
# ./bootstrap
/usr/share/aclocal/libmikmod.m4:11: warning: underquoted definition of AM_PATH_LIBMIKMOD

/usr/share/aclocal/libmikmod.m4:11: run info '(automake)Extending aclocal'

/usr/share/aclocal/libmikmod.m4:11: or see http://sources.redhat.com/automake/automake.html#Extending-aclocal
configure.in:19: installing `./compile'
configure.in:12: installing `./missing'
configure.in:12: installing `./install-sh'
common/mp4ff/Makefile.am: installing `./depcomp'
Makefile.am: installing `./INSTALL'
Ready to run ./configure
Well. wonderful! Let's see what she does!
# make
make all-recursive

Making all in libfaad

if /bin/bash ../libtool --tag=CC --mode=compile /usr/sfw/bin/gcc -DHAVE_CONFIG_H -I. -I. -I.. -iquote ../include -g -O2 -MT bits.lo -MD -MP -MF ".deps/bits.Tpo" -c -o bits.lo bits.c; \
then mv -f ".deps/bits.Tpo" ".deps/bits.Plo"; else rm -f ".deps/bits.Tpo"; exit 1; fi
mkdir .libs
/usr/sfw/bin/gcc -DHAVE_CONFIG_H -I. -I. -I.. -iquote ../include -g -O2 -MT bits.lo -MD -MP -MF .deps/bits.Tpo -c bits.c -fPIC -DPIC -o .libs/bits.o
gcc: ../include: linker input file unused because linking not done
cc1: error: unrecognized command line option "-iquote"
cc1: ../include: No such file or directory
Ouch. Now what? Looks like gcc struggles. Digging through the man pages for gcc reveals no entry for -iquote. Google (ironically: searching for "gcc +iquote") tells me this:
  In GCC 3.5, we are considering replacing the -I- command line option
with a new -iquote option.
Ok, so. What gcc version am I running?
# gcc --version
gcc (GCC) 3.4.3 (csl-sol210-3_4-20050802)
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
I tried the older versions of aclocal and automake, and a different make (gmake), but that still generates the same error...
So, faad (and likely faac) expect a more recent compiler... Not too weird considering that development is in the 4.something range.
Let's take it the other way and install the studio 12 suite. I had been planning on doing that anyway... (if not, I'll grab a more recent gcc somewhere else)

So with the following environment:
# env
_=/usr/bin/env
LANG=C
HZ=
PAGER=/usr/bin/less
PATH=/usr/sbin:/usr/bin:/usr/local/bin:/opt/SUNWspro/bin
SHELL=/usr/bin/ksh
HOME=/root
TERM=xterm
PWD=/root/faad2-2.7
TZ=Europe/Amsterdam
LESS=-X
# make
make all-recursive
Making all in libfaad
source='bits.c' object='bits.lo' libtool=yes \
DEPDIR=.deps depmode=none /bin/bash ../depcomp \
/bin/bash ../libtool --tag=CC --mode=compile cc -DHAVE_CONFIG_H -I. -I.. -iquote ../include -g -c -o bits.lo bits.c
cc -DHAVE_CONFIG_H -I. -I.. -iquote ../include -g -c bits.c -KPIC -DPIC -o .libs/bits.o
cc: illegal option -quote
Seems the "-iquote" flag is passed to studio cc as well. As I can't locate gcc 4.x (or 3.5+) quickly and don't want to put myself into the swamp called "building gcc 4 for opensolaris" just yet, I'll try and find out where the "-iquote" comes from...

Brute force approach:
# find . -type f -exec grep -l iquote {} \;
./libfaad/Makefile
./libfaad/Makefile.am
./libfaad/Makefile.in


Ah. So it's only in the libfaad makefiles! Let's see what they say:

# cd libfaad
# find . -type f -exec grep iquote {} \;
AM_CFLAGS = -iquote $(top_srcdir)/include
AM_CFLAGS = -iquote $(top_srcdir)/include
AM_CFLAGS = -iquote $(top_srcdir)/include


So what's the AM_CFLAGS in other makefiles?
# cd ..
# find . -type f -exec grep "^AM_CFLAGS" {} \+
./faad2-2.7/libfaad/Makefile:AM_CFLAGS = -iquote $(top_srcdir)/include
./faad2-2.7/libfaad/Makefile.am:AM_CFLAGS = -iquote $(top_srcdir)/include
./faad2-2.7/libfaad/Makefile.in:AM_CFLAGS = -iquote $(top_srcdir)/include
./faad2-2.7/plugins/mpeg4ip/Makefile:AM_CFLAGS = -D_REENTRANT -fexceptions
./faad2-2.7/plugins/mpeg4ip/Makefile.in:AM_CFLAGS = -D_REENTRANT -fexceptions
./faad2-2.7/plugins/mpeg4ip/Makefile.am:AM_CFLAGS = -D_REENTRANT -fexceptions

So. Let's find some recent gcc documentation and see what it suggests about its workings. Maybe I can just replace it with a simple "-I" directive...

Quoting from the most recent (4.3.3) documentation:
-Idir
Add the directory dir to the head of the list of directories to be searched for header files. This can be used to override a system header file, substituting your own version, since these directories are searched before the system header file directories. However, you should not use this option to add directories that contain vendor-supplied system header files (use -isystem for that). If you use more than one -I option, the directories are scanned in left-to-right order; the standard system directories come after.

If a standard system include directory, or a directory specified with -isystem, is also specified with -I, the -I option will be ignored. The directory will still be searched but as a system directory at its normal position in the system include chain. This is to ensure that GCC's procedure to fix buggy system headers and the ordering for the include_next directive are not inadvertently changed. If you really need to change the search order for system directories, use the -nostdinc and/or -isystem options.

-iquotedir
Add the directory dir to the head of the list of directories to be searched for header files only for the case of `#include "file"'; they are not searched for `#include <file>', otherwise just like -I.

So, my current theory: if I just briefly check the include files for the syntax of their #include statements, and make sure that they double-quote their files, I should be able to replace the "-iquote" with "-I" and stick with the same compilers.

Let's go about our work then:
# find . -name \*.h | wc -l
94

So, 94 targets for includes. That may be a lot of work...

Looks like there's plenty of double-quoted #includes there:

# find . -name \*.h -exec fgrep '#include' {} \+ | head
./common/mp4ff/mp4ff.h:#include
./common/mp4ff/mp4ff.h:#include "mp4ff_int_types.h"
./common/mp4ff/mp4ffint.h:#include "mp4ff_int_types.h"
./common/mp4ff/mp4ffint.h:#include
./common/mp4ff/mp4ffint.h:#include "../../config.h"
./common/mp4ff/mp4ff_int_types.h:#include
./common/mp4ff/mp4ff_int_types.h:#include
./common/faad/aacinfo.h:#include "filestream.h"
./include/faad.h:#include "neaacdec.h"
./libfaad/syntax.h:#include "bits.h"
So. I'll assume that lines containing at least one double-quote will contain two. Also (not too sure here), I'm assuming the angle-bracketed includes need no quoting.

So:
# find . -name \*.h -exec fgrep '#include' {} \+ | egrep -v '"|>'
#
Ergo, I'll assume for now that I can safely replace -iquote with -I, since I couldn't easily find out where the -iquote gets set (I mean, ./configure generates the makefiles, so it should be set there, right? But I can't find the clues in there to mitigate it).

So. I replaced the offending "-iquote" with "-I", started from scratch with the 1.10 versions of aclocal and automake, configured with
CC=/usr/sfw/bin/gcc
#./configure --with-mp4v2
[ ... output omitted ... ]
and
# make
[ ... output omitted ... ]

completed!
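For the record, the -iquote edit itself was along these lines (a rough sketch of what I believe I did; plain sed plus a temp file, since solaris sed has no -i):

# cd libfaad
# for f in Makefile Makefile.am Makefile.in ; do sed 's/-iquote /-I/' $f > $f.tmp && mv $f.tmp $f ; done
# cd ..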

I had a brief go at the xmms plugin, but that failed to build on the first go. Since it doesn't seem too relevant (I hope), I'll have a go at faac now, expecting to be able to mitigate the nasties I ran into rather quickly.

faac seems to suffer from similar woes: I'll leave the symlinks for aclocal and automake in place for now and apply similar fixes to the bootstrap file. All looks well (well, what do I know?) and a configure script is created.

The INSTALL file suggests nothing odd, no special flags, so let's hit it:

# ./configure
[ ... output omitted ... ]
checking for off_t... yes
checking for in_port_t... yes
checking for socklen_t... yes
checking for fpos_t.__pos... no
configure: creating ./config.status

.infig.status: error: cannot find input file:
# ls .infig.status
.infig.status
Googling for .infig.status suggests this may have to do with dos line endings screwing things up...

Hmm. I noticed a few more DOS files in faad as well, but they didn't seem to give me any trouble then...

Introducing a sledgehammer:
# find . -type f -exec file {} \; | egrep 'text|XML|script|html' | cut -d":" -f1 | while read f ; do cat $f | dos2unix > f ; mv f $f ; done

Now, let's see if we can make this puppy.

# make
fails.

I spotted:
psych.h:85:21: warning: no newline at end of file


and fixed that. Not that the make will complete because of that...

Next I spotted:
g++ -DHAVE_CONFIG_H -I. -I../.. -I../../include -Wall -c -o 3gp.o 3gp.cpp
../../depcomp: line 566: exec: g++: not found
*** Error code 127
make: Fatal error: Command failed for target `3gp.o'

Hmm, makes sense: /usr/sfw/bin is not in the path. So let's set CXX to /usr/sfw/bin/g++ and try again (funny that the configure didn't check for that...).
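In practice that's just (any equivalent way of getting CXX into configure's environment should do):

# export CXX=/usr/sfw/bin/g++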

# make clean

# ./configure
# make

Now it completes!

# make install

So. That's the dependencies worked out! jripper is a jar file, so let's see what else needs to be done except a:

$ java -jar jripper.jar

The config screen comes up with the binary applications compiled above correctly set up. Pity that oggenc and oggdec aren't present on opensolaris, so no ogg support.

Let's see if we can get this puppy to fly!

Well. I'll need to look into the cd device:
the log says:
19:49:40 ReadParam: /usr/bin/cdda2wav -D 1,0,0 -g -H -J -v toc,title,sectors
19:49:40 ProcessRunner::setStreams() thread.type=READ_STDERR_LINES thread.write=false aReadProcess=java.lang.UNIXProcess@94310b aReadFileName=null
19:49:40 /usr/bin/cdda2wav.bin: This is neither a scsi cdrom nor a worm device.
19:49:40 Can't read disc id number!
19:49:41 Exit code read process=1
19:49:41 Exit code write process=0
19:49:41 Has failed=true
19:49:41 Can't read disc id number!
19:49:42 Can't read disc id number!

So, I'll need to supply something that's digestible for cdda2wav.
# cdda2wav
No target specified, trying to find one...
Using dev=0,1,0.

Configuring the setup page with 0,1,0 as the cd device, shows (with a dvd in the drive):
19:52:58 ReadParam: /usr/bin/cdda2wav -D 0,1,0 -g -H -J -v toc,title,sectors
19:52:58 ProcessRunner::setStreams() thread.type=READ_STDERR_LINES thread.write=false aReadProcess=java.lang.UNIXProcess@129645a aReadFileName=null
19:52:59 Type: ROM, Vendor 'HL-DT-ST' Model 'DVDRAM GSA-T20N ' Revision 'WP03' MMC+CDDA
19:52:59 274432 bytes buffer memory requested, transfer size 57344 bytes, 4 buffers, 24 sectors
19:52:59 /usr/bin/cdda2wav.bin: Read Full TOC MMC failed (probably not supported).
19:52:59 #Cdda2wav version 2.01.01a53_sunos5_5.11_i86pc_i386, real time sched., soundcard, libparanoia support
19:52:59 Tracks:1 255:57.74
19:52:59 CDINDEX discid: flplyXqMOiodZEDJeDw5Ci6OD_g-
19:52:59 CDDB discid: 0x023bfd01
19:52:59 CD-Text: not detected
19:52:59 CD-Extra: not detected
19:52:59 Album title: '' from ''
19:52:59 T01: 0 255:57.74
19:52:59 Leadout: 1151849
19:52:59 /usr/bin/cdda2wav.bin: This disk has no audio tracks.
19:53:00 Exit code read process=1
19:53:00 Exit code write process=0
19:53:00 Has failed=true

So, it looks like jripper is able to rip audio tracks from cd's, but isn't capable of ripping dvd's (with this bit of hardware anyway).
That's quite the disappointment I'd say, after quite a bit of effort, and I wasn't even expecting the regular "./configure ; make ; make install" to be enough.

Well. off to bed.

Thursday, February 19, 2009

ending LD_LIBRARY_PATH woes

As I have the habit of installing most of the hand-built tools into /usr/local, I quite frequently run into trouble when specific apps require additions to the library path.

The rationale being that /usr/local seems to be somewhat of a default, and since the various software repositories for solaris such as blastwave, sunfreeware etc. have a tendency to install somewhere in /opt, this keeps at least some sanity in my filesystems.

Like any lazy bastard, I normally would fix this by temporarily setting LD_LIBRARY_PATH to include the correct path and firing up the application.

Solaris and opensolaris have had a clean alternative to that for a long time: crle...

Here's how to add /usr/local/lib to it:
# crle

Default configuration file (/var/ld/ld.config) not found
Platform: 32-bit LSB 80386
Default Library Path (ELF): /lib:/usr/lib (system default)
Trusted Directories (ELF): /lib/secure:/usr/lib/secure (system default)
# crle -u -l /usr/local/lib
# crle

Configuration file [version 4]: /var/ld/ld.config
Platform: 32-bit LSB 80386
Default Library Path (ELF): /lib:/usr/lib:/usr/local/lib
Trusted Directories (ELF): /lib/secure:/usr/lib/secure (system default)

Command line:
crle -c /var/ld/ld.config -l /lib:/usr/lib:/usr/local/lib

#
Note: when the config file already exists, a single "-l" will overwrite the previous entries; however, in the "special" case of a nonexistent /var/ld/ld.config, it will append the directory to the default library path.

Monday, January 19, 2009

pidgin, facebook plugins and opensolaris (nevada)


I recently stumbled over a facebook-plugin for pidgin and thought I'd give it a try and build it on my fresh install of nv_105.

Downloaded both the icons and the source...

# bunzip2 pidgin-facebookchat-source-1.47.tar.bz2
# /usr/sfw/bin/gtar xvf pidgin-facebookchat-source-1.47.tar
# cd pidgin-facebookchat

# make libfacebook.so
# cp libfacebook.so /usr/lib/purple-2/

# cd /usr/lib/purple-2/
# chgrp bin libfacebook.so

Icons were easy...

# bunzip2 pidgin-facebookchat-1.47.tar.bz2
# /usr/sfw/bin/gtar xvf pidgin-facebookchat-1.47.tar
# cp ./usr/share/pixmaps/pidgin/protocols/22/facebook.png /usr/share/pixmaps/pidgin/protocols/22/
# cp ./usr/share/pixmaps/pidgin/protocols/48/facebook.png /usr/share/pixmaps/pidgin/protocols/48/
# cp ./usr/share/pixmaps/pidgin/protocols/16/facebook.png /usr/share/pixmaps/pidgin/protocols/16/

Damn. By far the easiest build I've run into...