Intel and AMD Processor Micro-Benchmarking

There are a large number of synthetic CPU benchmarks available out there - for example GeekBench, JetStream, and SPEC. The utility of these benchmarks for predicting whole-system performance is debatable. Then there are benchmarks that attempt to measure whole-system performance: for example, the time-honored Linux kernel compilation, and elaborate benchmarks such as SAP Sales and Distribution (SAP SD), the basis of the famous "SAPS rating."

Here I am attempting to measure some degree of whole-system performance by using ffmpeg to transcode Big Buck Bunny. This is a CPU-bound (more precisely, FPU-bound) benchmark with some memory and I/O load due to the very large size of the movie. I've used a statically-linked binary that is not particularly optimized for specific processor features or GPUs (ffmpeg can greatly speed up transcoding on Nvidia GPUs).



Here are the necessary steps to replicate my results (these are for Linux; on macOS, I used the ffmpeg distribution from Homebrew, but the steps are otherwise identical):

wget https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz

tar xf ffmpeg-release-amd64-static.tar.xz

wget http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_60fps_normal.mp4

for i in 1 2 3; do
  rm -f output.mp4
  time ffmpeg-4.1.3-amd64-static/ffmpeg -loglevel panic -i bbb_sunflower_1080p_60fps_normal.mp4 -vcodec h264 -acodec aac -strict -2 -crf 26 output.mp4 >>out.txt 2>&1
done

We can see from this simple test that, for the macOS trials:

  • there is a 37% performance improvement from Sandy Bridge to Broadwell (3 generations)
  • 17% improvement from Broadwell to Kaby Lake (2 generations)
Over 5 generations there is a cumulative improvement of 48%.

For the AWS M instance family:
  • 13% from Sandy Bridge (m1) to Ivy Bridge (m3) (1 generation)
  • 14% from Ivy Bridge (m3) to Broadwell (m4) (2 generations)
  • 13% from Broadwell (m4) to Skylake (m5) (1 generation)
Over 4 generations there is a cumulative improvement of 35%.

For the AWS C instance family:
  • 28% from Ivy Bridge EP to Haswell (1 generation)
  • 12% from Haswell to Skylake (2 generations)
Over 3 generations there is a cumulative improvement of 37% - but this is also partially due to differing clock speeds.
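The cumulative figures above follow from compounding the per-generation numbers, assuming each percentage is a reduction in transcoding time relative to the previous generation (the interpretation under which the quoted totals are consistent). A quick sketch:

```python
# Compound per-generation reductions in transcoding time into a
# cumulative improvement: each generation keeps (1 - r) of the time.
def cumulative_improvement(reductions):
    remaining = 1.0
    for r in reductions:
        remaining *= (1.0 - r)
    return 1.0 - remaining

macos = cumulative_improvement([0.37, 0.17])        # ~48%
aws_m = cumulative_improvement([0.13, 0.14, 0.13])  # ~35%
aws_c = cumulative_improvement([0.28, 0.12])        # ~37%
print(f"macOS: {macos:.0%}, AWS M: {aws_m:.0%}, AWS C: {aws_c:.0%}")
```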


We would normally consider a benchmark such as SAPS to be a rigorous, whole-system benchmark, because SAPS measures order line items per hour (an application metric) across infrastructure (CPU, memory, I/O), operating system, Java virtual machine, database, and ERP application. But it very much seems that SAPS is essentially a CPU benchmark.

Consider the following:

  • SAP certification #2015005 from 2015-03-10 (AWS c4.4xlarge, 8 cores / 16 threads) - 19,030 SAPS or 2,379 SAPS/core
  • SAP certification #2015006 from 2015-03-10 (AWS c4.8xlarge, 18 cores / 36 threads) - 37,950 SAPS or 2,108 SAPS/core
Here we observe almost linear scaling - as the number of cores/threads is increased from 8 to 18 (2.25X) the SAPS increases from 19,030 to 37,950 (1.99X).
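The scaling factors quoted above can be checked directly from the two certification results:

```python
# Compare core-count scaling to SAPS scaling for the two C4 results above.
c4_4x = {"cores": 8, "saps": 19030}
c4_8x = {"cores": 18, "saps": 37950}

core_ratio = c4_8x["cores"] / c4_4x["cores"]  # 2.25x the cores
saps_ratio = c4_8x["saps"] / c4_4x["saps"]    # ~1.99x the SAPS
efficiency = saps_ratio / core_ratio          # ~89%, i.e. near-linear scaling
print(f"cores: {core_ratio:.2f}x, SAPS: {saps_ratio:.2f}x, efficiency: {efficiency:.0%}")
```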

If we consider the SAPS results for the previous-generation AWS C3 instance family:

  • SAP certification #2014041 from 2014-10-27 (AWS c3.8xlarge, 16 cores / 32 threads) - 31,830 SAPS or 1,989 SAPS/core

The C3 result is about 6% lower than the c4.8xlarge on a per-core basis. Recall that on the naive Big Buck Bunny transcoding benchmark, the C4 is about 12% faster than the C3. Thus SAPS appears not to be purely a CPU benchmark (nor should it be), but it is strongly CPU-dominated: at least half of the SAPS difference is directly attributable to CPU performance.

Naively concluding, there appears to be (on average) around a 10% performance improvement per Intel CPU generation (across tick and tock). At that rate, CPU performance doubles in about 7.3 years (87 months) - a far cry from Moore's Law, which optimistically predicted 18 months.
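The doubling time follows from compounding 10% per generation, assuming roughly one CPU generation per year:

```python
import math

# Years (generations) for performance to double at a fixed
# per-generation improvement rate: solve (1 + rate)^n = 2.
rate = 0.10
years = math.log(2) / math.log(1 + rate)
print(f"{years:.1f} years (~{years * 12:.0f} months)")  # ~7.3 years (~87 months)
```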

ProLink PIC3002WN Review

This is a discontinued IP camera from ProLink (https://prolink2u.com/product/pic-3002wn/). There is another review here but otherwise not much additional information.

I bought four of these for home surveillance, but have discovered a number of shortcomings that you should consider before buying:
  • the iOS client hard-resets my iPhone 7 Plus randomly, although my wife's iPhone 8 is "fairly" stable
  • only SD card recording works reliably; when recording to a NAS (Windows share), the recording randomly stops, leaving days and days with no available recordings
  • there is no FTP recording, contradicting the review linked above
  • Dropbox recording works, and so far is reliable; however you would need a paid Dropbox account since the number of files is quite large
  • the cameras sometimes randomly lose their recording settings
These cameras are cheap and in principle have a lot of features. The video quality is reasonably OK and the IR mode works fine, but unless you use SD card or Dropbox recording, the "added" features are unreliable.

50mm on Full Frame Test

This is a contrived test of center and corner sharpness of various 50mm lenses, SLR and rangefinder. I decided to compare the performance of my 1938 Leitz Summar before selling it. The test was fairly simple: I took two pictures of a Schneider wooden cuckoo clock, one at the center of the frame, and one at the edge (but not corner). All photos were taken on a Sony A7-II, with IBIS enabled, and manual focus using the zoom-in button. Distance was about 3 meters (typical portrait or half-body distance) and all lenses were wide-open to maximize aberrations.

Here's the clock face at the center of the image (rotated 90 degrees for convenience):

And here is the same clock face at the edge of the image (also rotated 90 degrees):

And here are the center images:

Canon 50mm f/1.8 LTM

Industar-61LZ 55mm f/2.8

Jupiter-3 50mm f/1.5 Sonnar copy

Jupiter-8 50mm f/2 Sonnar copy

Carl Zeiss Jena Pancolar 50mm f/1.8

Contax Carl Zeiss Planar 50mm f/1.7

Canon 50mm f/1.8 STM

Ernst Leitz Wetzlar 50mm f/2 Summar (1938)

Pentax Super-Takumar 55mm f/1.8

Pentax Super-Takumar 50mm f/1.4 (7-element)

And the edge images:
Canon 50mm f/1.8 LTM

Industar-61LZ 55mm f/2.8

Jupiter-3 50mm f/1.5 Sonnar copy

Jupiter-8 50mm f/2 Sonnar copy

Carl Zeiss Jena Pancolar 50mm f/1.8

Contax Carl Zeiss Planar 50mm f/1.7

Canon 50mm f/1.8 STM

Ernst Leitz Wetzlar 50mm f/2 Summar (1938)

Pentax Super-Takumar 55mm f/1.8

Pentax Super-Takumar 50mm f/1.4 (7-element)

3D-Printed Finder Guider

This is a 3D-printed finder guider that I built from a 60mm achromatic objective in a metal cell from Sheldon Faworski, a length of 2" cardboard mailing tube, and some parts 3D-printed from ABS.

The 3D-printed parts include:
  • an adapter to marry the lens in cell with the cardboard tube (with suitable tapering down that does not vignette the lens aperture)
  • an adapter with two tapped grub screws to allow a standard 1.25" nosepiece to be attached to the finder guider (an ASI120MM is attached)
  • two guide scope rings
The 2" mailing tube was cut to the exact length so that the ASI120MM only just reaches focus. The ABS parts were then epoxied to the cardboard tube.



Viltrox EF-NEX IV AF Adapter on Sony A7 II Brief Impressions

I have written about the Viltrox EF-FX1 before.

TL;DR - the EF-NEX IV performs pretty much the same way as the EF-FX1: marginally better in some cases, and worse in others.

AF performance is by and large acceptable to mediocre, with two notable exceptions: the Canon 16-35mm f/4L IS and the 10-18mm f/4.5-5.6 IS STM will not autofocus at all. On the plus side, the A7 II actually manages to autofocus the Canon 180mm f/3.5L Macro, which the EF-FX1 was unable to - though AF is quite slow and not very reliable.

The EF-NEX IV also manages to report the focal length properly (which the EF-FX1 could not), and it can detect APS-C lenses and automatically crop (I was unable to get the "tunnel view" with the 10-18mm IS STM).