Installing AWS CLI on Solaris 10


A (former) colleague needed to transfer an Oracle Data Pump file to Amazon S3 (so that Amazon RDS for Oracle could import it), but the source server was running Oracle Solaris and the customer needed the AWS CLI on that server. The AWS documentation has no installation steps for Solaris, so I decided to write up this short note.

The test system I used for validation was a Sun Fire V100 (a circa-2003 Sun SPARC machine with a single-socket, single-core UltraSPARC IIi clocked at 650 MHz - probably slower than a Raspberry Pi). This test system was running Oracle Solaris 10 Update 11 with the CPU OS Patchset 2018/01 Solaris 10 SPARC applied.

The output of uname -a is:

SunOS v100.local 5.10 Generic_150400-59 sun4u sparc SUNW,UltraAX-i2

The first required step is to install a newer version of Python (even with the 2018/01 patchset, Solaris 10 ships with Python 2.6.4, which is far too old). Note that this Python package comes from OpenCSW Solaris Packages, a third party; enterprise customers would need their security teams to validate whether OpenCSW is a permissible source of packages.

pkgadd -d http://get.opencsw.org/now

/opt/csw/bin/pkgutil -U

/opt/csw/bin/pkgutil -y -i python27
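To confirm the OpenCSW Python is usable before proceeding, a quick check (this assumes the package installs its interpreter as /opt/csw/bin/python2.7):

/opt/csw/bin/python2.7 --version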

Once Python 2.7 is installed, you can then download version 1 of the AWS CLI. Note that AWS CLI support even for Python 2.7 is ending very soon (mid-2021 at the time of this writing), so a specific AWS CLI version that still supports Python 2.7 must be used. Also note that we have to pass the --no-check-certificate switch to wget, because the version of wget that ships with Solaris 10 is very old and does not recognize the SSL certificate presented by Amazon S3.

wget --no-check-certificate "https://s3.amazonaws.com/aws-cli/awscli-bundle-1.19.24.zip"

unzip awscli-bundle-1.19.24.zip
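Because certificate validation was disabled for the download, it is worth checking the archive's integrity out-of-band. Solaris 10 ships a digest(1) utility; the resulting hash can be compared against one computed for the same file on a trusted machine:

digest -a sha256 awscli-bundle-1.19.24.zip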

We can then install the AWS CLI, but we need to make sure the right version of Python is used, i.e. it must come before the system Python in the PATH.

export PATH=/opt/csw/bin:$PATH

./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

After the installer completes, the AWS CLI is installed, and you can configure credentials as appropriate.
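For example (the dump file path and bucket name below are placeholders):

# confirm the installation and set up credentials
/usr/local/bin/aws --version
/usr/local/bin/aws configure

# then upload the Data Pump file to S3
/usr/local/bin/aws s3 cp /path/to/datapump.dmp s3://example-bucket/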

3D-Printed Adapter for ZWO ASI120 to Lacerta OAG and Canon 650D

Lacerta makes a very low-profile off-axis guider with an M48 thread on one side and a Canon EF bayonet on the other. It is reasonably well-made, but the (used) one that I bought had some pretty serious setscrew markings on the guide camera pick-off tube, due to over-tightening of the setscrews that hold the guide camera T-mount to the pick-off tube.

I can see why the previous owner did this: if the setscrews aren't tight, the guide camera can rotate, particularly if the guide camera's USB cable gets snagged on something.

I decided to 3D-print an adapter that would screw into the M4 threaded holes on the back of the ZWO ASI120 guide/planetary camera, hold the camera at the right distance to achieve focus, and bolt to the 1/4"-20 tripod socket on the Canon EOS 650D DSLR.

It's a very simple design, and it would need to be modified for any other guide camera (and potentially any other DSLR), since both the dimensions and the focal points differ. In this case, I'm using a William Optics Flat6A flattener/reducer with its native M48 interface at the rear.

The design is here.

An added bonus is that the 3D-printed adapter also helps to protect the guide camera and OAG from mishandling. Without the adapter acting as reinforcement, imagine what would happen if you dropped the DSLR: the pick-off tube on the OAG would most likely get bent out of shape.

And here are a few photos:

[photos]

Intel and AMD Processor Micro-Benchmarking

There are a large number of synthetic CPU benchmarks available - for example GeekBench, JetStream, and SPEC - though the utility of these benchmarks for predicting whole-system performance is debatable. Then there are benchmarks that attempt to measure whole-system performance directly; for example, the time-honored Linux kernel compilation, and elaborate benchmarks such as SAP Sales and Distribution (SAP SD), the source of the famous "SAPS rating."

Here I am attempting to measure some degree of whole-system performance by using ffmpeg to transcode Big Buck Bunny. This is a CPU-bound (more precisely, FPU-bound) benchmark with some memory and I/O load due to the very large size of the movie. I've used a statically-linked binary that is not specifically optimized for particular processor features or GPUs (ffmpeg can greatly speed up transcoding on Nvidia GPUs).



Here are the necessary steps to replicate my results (these are for Linux; on macOS, I used the ffmpeg distribution from Homebrew, but the steps are otherwise identical):

wget https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz

tar xf ffmpeg-release-amd64-static.tar.xz

wget http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_60fps_normal.mp4

# run the transcode three times; ffmpeg's output is appended to out.txt,
# while the time(1) results print to the terminal
for i in 1 2 3; do
  rm -f output.mp4
  time ffmpeg-*-amd64-static/ffmpeg -threads 2 -loglevel panic -i bbb_sunflower_1080p_60fps_normal.mp4 -vcodec h264 -acodec aac -strict -2 -crf 26 output.mp4 >> out.txt 2>&1
done

Note that we are limiting the number of threads ffmpeg can use to 2, which restricts it to at most 2 cores. On a machine with 4 or more cores the encoding results are much better, but since many of my data points are from 2-core machines, we have to limit the number of threads to 2 in order to have an apples-to-apples comparison.

Note that on a 2-core hyper-threaded system, 4 threads is "in theory" ideal; however, hyper-threading mainly benefits workloads that leave execution units idle (I/O-bound workloads, for example), and since ffmpeg's encoder is CPU-bound, a thread limit of 2 is more appropriate.
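To see how many physical cores and hardware threads a given Linux machine has (on macOS, sysctl hw.physicalcpu hw.logicalcpu reports the equivalent):

lscpu | egrep 'Socket|Core|Thread'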

We can see in this simple test that, for the macOS trials (the improvements quoted here are reductions in transcode time):
  • there is a 37% performance improvement from Sandy Bridge to Broadwell (3 generations)
  • a 17% improvement from Broadwell to Kaby Lake (2 generations)
Over 5 generations there is a cumulative improvement of 48%.

For the AWS M instance family:
  • 13% from Sandy Bridge (m1) to Ivy Bridge (m3) (1 generation)
  • 14% from Ivy Bridge (m3) to Broadwell (m4) (2 generations)
  • 13% from Broadwell (m4) to Skylake (m5) (1 generation)
Over 4 generations there is a cumulative improvement of 35%.

For the AWS C instance family:
  • 28% from Ivy Bridge EP to Haswell (1 generation)
  • 12% from Haswell to Skylake (2 generations)
Over 3 generations there is a cumulative improvement of 37% - but this is also partially due to differing clock speeds.
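Since the per-step improvements are reductions in transcode time, they compound multiplicatively, and the cumulative figures above can be reproduced with bc:

# cumulative improvement = 1 - (product of the remaining time fractions)
echo "1 - 0.63 * 0.83" | bc -l          # macOS trials: ~0.48 (48%)
echo "1 - 0.87 * 0.86 * 0.87" | bc -l   # AWS M family: ~0.35 (35%)
echo "1 - 0.72 * 0.88" | bc -l          # AWS C family: ~0.37 (37%)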


We would normally consider a benchmark such as SAPS to be a rigorous, whole-system benchmark, because SAPS measures order line items per hour (an application metric) across infrastructure (CPU, memory, I/O), operating system, Java virtual machine, database, and ERP application. But it very much seems that SAPS is essentially a CPU benchmark.

Consider the following:
  • SAP certification #2015005 from 2015-03-10 (AWS c4.4xlarge, 8 cores / 16 threads) - 19,030 SAPS or 2,379 SAPS/core
  • SAP certification #2015006 from 2015-03-10 (AWS c4.8xlarge, 18 cores / 36 threads) - 37,950 SAPS or 2,108 SAPS/core
Here we observe almost linear scaling - as the number of cores/threads is increased from 8 to 18 (2.25X) the SAPS increases from 19,030 to 37,950 (1.99X).

If we consider the SAPS results for the previous-generation AWS C3 instance family:
  • SAP certification #2014041 from 2014-10-27 (AWS c3.8xlarge, 16 cores / 32 threads) - 31,830 SAPS or 1,989 SAPS/core
The C3 result is about 6% lower than the c4.8xlarge on a per-core basis. Recall that in the naive Big Buck Bunny transcoding benchmark, the C4 was about 12% faster than the C3. Thus it appears that SAPS is not purely a CPU benchmark (nor should it be), but it is strongly CPU-dominated: at least half of the SAPS result is directly attributable to CPU performance.
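The per-core figures and scaling ratios are easy to reproduce:

echo "19030/8" | bc -l      # 2378.75 SAPS/core (c4.4xlarge)
echo "37950/18" | bc -l     # 2108.33 SAPS/core (c4.8xlarge)
echo "31830/16" | bc -l     # 1989.38 SAPS/core (c3.8xlarge)
echo "37950/19030" | bc -l  # 1.99X the SAPS for 2.25X the cores
echo "1989/2108" | bc -l    # ~0.94, i.e. about 6% lower per core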

Naively concluding, there appears to be (on average) around a 10% performance improvement across Intel CPU generations (across tick and tock). At roughly one generation per year, this means CPU performance doubles in about 7.3 years (87 months) - a far cry from Moore's Law, which optimistically predicted a doubling every 18 months.
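The doubling time follows from compounding roughly 10% per generation, assuming about one generation per year:

# solve 1.1^n = 2 for n, using natural logs in bc
echo "l(2)/l(1.1)" | bc -l    # ~7.27 generations, i.e. about 87 months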