OpenHAB was installed and ready to go fairly quickly. After spending some time in the OpenHAB wiki and studying the demo files, I started with an empty OpenHAB configuration and began creating the first items and rules.
I had been interested in smart homes and home automation for quite a while, but the prices of the devices had repeatedly kept me from investing in a system.
Sometime in spring 2015, I took a photo of my electricity meter to record the reading and was surprised to see a blinking on the phone screen that was invisible to the naked eye. Apparently it was infrared light that the phone camera did not filter properly.
As my internet line's bandwidth is terrible every day during "prime time" (roughly between 8pm and 11pm), I wanted to create a small test, both to have proof to show my ISP and to find out the critical hours.
As my ISP always tells me to test with a specific speed test (it actually resides on a web page owned by the provider, so it is only a few hops away), I started analyzing what the speed test really does.
The page is http://speedtest-1.unitymedia.de/
As it turns out, the speed test is a Flash application. When analyzing the network traffic, you quickly find out that the speed test basically does nothing more than download a huge 4000 x 4000 pixel random JPG file and measure the time it took to download it:
The address is called with a parameter named 'x' which is randomized on every new test.
Knowing how it works, I fired up Notepad++ and wrote a small node.js script that repeats this test every two minutes and dumps the result into a syslog file that I can feed into my small Splunk instance to graph it:
var syslog = require('./syslog');
var http = require('http');

var intervalSeconds = 120;
var totalMBit = 0;
var totalBytesPerSec = 0;
var numRunning = 0;

function bpsToMBit( _bps )
{
    return _bps * 8.0 / (1024 * 1024);
}

function testBandwidthMultiple()
{
    // skip this round if the previous downloads are still running
    if( numRunning > 0 )
        return;

    totalMBit = 0;
    totalBytesPerSec = 0;

    // five connections at once to saturate the line
    for( var i = 0; i < 5; ++i )
        testBandwidth();
}

function testBandwidth()
{
    // var url = 'http://speedtest-1.unitymedia.de/speedtest/random4000x4000.jpg';
    var url = 'xxxxx';
    url = url + "?x=" + (Math.random() * 100000.0 + 10000.0);

    var size = 0;
    var t0 = 0;
    var t1 = 0;

    ++numRunning;
    console.log( "Connecting to " + url );

    http.get(url, function(res)
    {
        // console.log('HEADERS: ' + JSON.stringify(res.headers));
        res.on('data', function(data)
        {
            var tCurrent = Date.now();
            if( t0 == 0 )
                t0 = tCurrent;
            size += data.length;
            t1 = tCurrent;
        });
        res.on('end', function()
        {
            --numRunning;
            var dt = t1 - t0; // milliseconds
            var bytesPerSec = size / dt * 1000.0;
            var mbit = bpsToMBit(bytesPerSec);
            totalMBit += mbit;
            totalBytesPerSec += bytesPerSec;
            console.log( "size " + size + ", bytes per sec " + bytesPerSec + ", mbit " + mbit + ", " + numRunning + " parts left" );
            if( numRunning == 0 )
            {
                console.log( "finished, size " + size + ", total bytes per sec " + totalBytesPerSec + ", total mbit " + totalMBit );
                syslog.log( "BANDWIDTH", "isp", "Unitymedia"
                    , "mbit", totalMBit
                    , "bytespersec", totalBytesPerSec
                    , "size", size );
            }
        });
    }).on('error', function(e)
    {
        --numRunning;
        console.log("Error: " + e);
    });
}

testBandwidthMultiple();
setInterval( testBandwidthMultiple, 1000 * intervalSeconds );
So while I am typing this article, the test has been running for a few minutes and the first graph can be shown:
Given the fact that I pay for a 100 MBit line, this is terrible. Of course, I'll post a longer graph tomorrow when the script has been running for a while.
Update October 1st, 2015:
So as promised, here is a graph for the last 24 hours:
As you can see, there is a huge bandwidth drop starting at about 6pm and lasting until about 1am. The same thing happened the day before.
Additionally, you can see that I never managed to get above 50 MBit, and I started hunting for a bug in my code but didn't find one. I have now modified the code to use five connections at once (the code snippet above is updated), but still there was no large increase. Then suddenly the speed test from Unitymedia went down and I was no longer able to test it further, so I had to find a new host to download test data from.
At first, I tried my small V-Server, but as you can see, it couldn't provide enough bandwidth. After that, I finally found a new host so the tests could continue. As you can see, I now reach the promised 100 MBit, so this evening will be the first where you can see the full bandwidth drop. The next image will be posted tomorrow.
Update October 6th, 2015:
Sorry for being late, but here are the new graphs:
As you can see, the problem gets worse on weekends. In the meantime, a Unitymedia technician is scheduled to check my line on Wednesday. That doesn't make much sense in my case because the SNR levels on the modem look okay, but he will come anyway.
Recently, I’ve been working on a larger game project which should run on the Adobe Flash and Adobe AIR platforms.
TL;DR – If you want to create 3D realtime games, use something different, don’t use Flash.
Because the preferred client technology was Flash anyway, we did some quick tests with some groups of meshes rotating around. The results were okay, so after some additional tests, we gave the "OK" to do it with Flash.
Development started with the Away3D 4.0 beta; at that time, Away3D had a lot of bugs and a lot of features were missing. Some of the bugs were fixed by myself, others were reported and fixed by the Away3D team.
Let me say that although Away3D looked unfinished at that time and even nowadays lacks some features, in general Away3D is not a bad engine and looks like it has been coded by people who know how to do it right.
But whatever improvement the Away3D team or I implemented, in the end we all had to cope with the poor Stage3D interface.
For standard textures, the size is limited to powers of two. If you develop for mobile devices, you know for sure that you need to save memory wherever you can. POT textures force you to insert empty areas into a texture, which eats up additional memory. As plenty of mobile devices accept NPOT textures, this hard restriction doesn't make sense.
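To put a rough number on that padding cost, here is a small back-of-the-envelope sketch in plain JavaScript. The helper names are my own, and it assumes an uncompressed 32-bit RGBA texture without mip maps:

```javascript
// Hypothetical helper: how much memory is wasted when an NPOT
// texture has to be padded up to the next power-of-two size.
function nextPow2(n) {
    var p = 1;
    while (p < n) p *= 2;
    return p;
}

// Bytes wasted for a w x h texture stored as 32-bit RGBA.
function potOverheadBytes(w, h) {
    var used = w * h * 4;
    var padded = nextPow2(w) * nextPow2(h) * 4;
    return padded - used;
}

// e.g. a 640x360 UI texture must be stored as 1024x512:
// 2 MiB allocated for only 0.88 MiB of actual pixels.
```

More than half of the allocation can be pure padding, which is exactly the kind of waste you cannot afford on a mobile GPU.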
You can create standard textures without mip maps. But even if you do not need them, Flash always allocates the full space for the mip maps on the GPU.
Recently, a new texture type became available, called "RectangleTexture" (whom can I blame for this name?), which supports NPOT textures. The downside of RectangleTexture is: there is no mip map support at all.
Flash supports runtime texture compression. Not a bad thing, but this feature is not supported on mobile devices! Mobile devices badly need it while it is not so important for the web, so why isn't it available there?
There is also an offline tool available, png2atf, but it takes several minutes to compress a single texture.
Additionally, most of the time we cannot use compressed textures because they look so bad. At least when using alpha textures, the visual artifacts are unacceptable.
Rendering single objects with a lot of polygons works as expected; performance is good. No 3D API usually has problems rendering static meshes, so this is no surprise.
Things get worse when using more draw calls or when geometries need shader parameters. I haven't seen any API that is slower here. The Context3D.setProgramConstantsFromXXX functions are totally useless if you want to do something real, for example skeletal animation. Using those functions together with drawTriangles will eat up most of your precious CPU time. Not a big problem on desktop systems, but on iOS you'll notice how awfully slow the Context3D interface is here.
This is probably "by design": while the GPU / DirectX / OpenGL usually work with 32-bit floating point data, Flash only supports Numbers, which are 64-bit floats. So everything needs to be converted every time you call these functions. And there is no way around it when you do, for example, skeletal animation.
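As a rough sketch of what that conversion amounts to (plain JavaScript; the function name is illustrative, not a Flash API), every upload of shader constants implies a narrowing loop like this over the data:

```javascript
// Every 64-bit Number has to be narrowed to a 32-bit float
// before it can reach the GPU - once per value, per call.
function toFloat32(constants) {
    var out = new Float32Array(constants.length);
    for (var i = 0; i < constants.length; ++i)
        out[i] = constants[i];   // double -> float narrowing copy
    return out;
}

// For skeletal animation this runs for every bone matrix,
// every frame - pure CPU overhead before drawTriangles.
```

The copy itself is trivial; the problem is that it happens on every single constant upload and cannot be skipped.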
I tried to use FlasCC to work around this problem, but guess what: there are no native calls to these functions where you could pass C float data. You are forced to use the ActionScript variants even there, so there is no win at all.
The language used by Adobe for shaders is called AGAL – Adobe Graphics Assembly Language.
Assembly? Yes, indeed. I've worked with D3D, OpenGL, and NVIDIA Cg for 10+ years; they all offer high-level shading languages. But with Adobe, you get ported back into the 1980s to fight for register usage and to use mov, add, sub again. You don't want to do that in 2014 anymore. Needless to say, it's also quite limited in terms of available registers, number of opcodes, etc.
If you create a 3D game, you need strong math classes. Unfortunately, ActionScript 3 is not the best language for this, and the classes provided by the runtime lack a lot of features.
In ActionScript, Vector3D is not a value type but a full-fledged class, with all its advantages but also its disadvantages.
There are only three static constants inside, one for each axis. A zero vector is missing. Because all Vector3Ds are references, you can create ugly things if you aren't careful. For example, take a look at the following code:
var foo:Vector3D = Vector3D.X_AXIS;
foo.negate();
What actually happens here is that you negate the value of Vector3D.X_AXIS itself, because foo is only a reference. Looking to prank your coworkers? Just fool around with everyone's coordinate system.
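The same pitfall can be sketched in plain JavaScript, with an object literal standing in for Vector3D (no Flash API involved):

```javascript
// Both names point at ONE shared object, not at two copies.
var X_AXIS = { x: 1, y: 0, z: 0 };

var foo = X_AXIS;   // a reference, not a copy
foo.x = -foo.x;     // "negating" foo ...

// ... has silently flipped the shared constant:
// X_AXIS.x is now -1 for every other user of it.

// The fix: take an explicit copy before mutating.
var bar = { x: X_AXIS.x, y: X_AXIS.y, z: X_AXIS.z };
bar.y = 5;          // X_AXIS stays untouched this time
```

In AS3 the equivalent defensive move is to call clone() on the constant before touching it.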
Missing operator overloading is also something that doesn’t really help to achieve code clarity.
var meshPos:Vector3D = m_camera.forwardVector.clone();
meshPos.scaleBy( distToCam );
meshPos.incrementBy( m_camera.position );
It would be a lot easier to read:
var meshPos:Vector3D = m_camera.position + m_camera.forwardVector * distToCam;
When creating realtime applications, it is desirable to avoid any allocation at all. Unfortunately, because Vector3D is not a value type, Flash doesn't help here. Avoid
vecA = vecA.add( vecB );
Because this allocates a new vector. Instead, use
vecA.incrementBy( vecB );
This can be worked around, but what if you want to transform, say, 10,000 Vector3D objects by a Matrix3D? You're lost. All transformation functions of the Matrix3D class return new Vector3D objects; there is no function that works on a Vector3D in place. More Matrix3D functions act this way, for example decompose, and even the position and rawData properties.
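What such an in-place API could look like is easy to sketch (plain JavaScript, illustrative names; a flat row-major 3x3 array stands in for Matrix3D):

```javascript
// Transform a point by a 3x3 matrix, writing into a
// caller-supplied output object - zero allocations per point.
function transformInto(m, v, out) {
    var x = v.x, y = v.y, z = v.z;   // read first, so out may alias v
    out.x = m[0] * x + m[1] * y + m[2] * z;
    out.y = m[3] * x + m[4] * y + m[5] * z;
    out.z = m[6] * x + m[7] * y + m[8] * z;
    return out;
}

// Transforming thousands of points in place creates no garbage:
var rot90Z = [ 0, -1, 0,
               1,  0, 0,
               0,  0, 1 ];           // 90 degrees around z
var points = [ { x: 1, y: 0, z: 0 }, { x: 0, y: 2, z: 0 } ];
for (var i = 0; i < points.length; ++i)
    transformInto(rot90Z, points[i], points[i]);
```

Reading the components into locals first makes it safe to pass the same object as input and output, which is exactly the pattern the Matrix3D API rules out.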
All this leads to a weak math performance of Flash in general.
More language issues are problematic, for example the lack of real inlining. The ASC2 compiler introduced an inlining feature that is so limited it can only be used on very simple functions. It is not automatic but must be requested manually. If you overuse it, you'll often end up with cryptic compilation errors like "stack underflow". If you also use C++ at times, "inlining support" surely leads you to expect more than what Adobe offers here.
Ever tried to deploy a larger AIR app to iOS? Waiting more than 30 minutes caused us to use separate build machines for the iOS versions. They must be optimizing really hard, you may think. It would be nice if the performance justified this; unfortunately, it doesn't.
With regard to Stage3D, there is no UI solution that I'm aware of. There is Starling, but as long as you cannot use the UI that you designed in Flash Pro, it's more or less useless. We had the idea to write a converter, but animated MovieClips are problematic: you won't gain any performance when rendering a MovieClip to a texture every frame, or you'll lose a lot of memory to sprite sheets that may become very large.
So a UI with Stage2D is the current solution if you want a graphics artist to be able to design it.
Be warned: the 2D scanline renderer of Flash is awfully slow on mobile devices. If your artist really does a good job visually, you'll probably end up spending days optimizing his number of display objects down to something acceptable, just to reach at least 15 frames/sec.
Adobe decided to always render Stage2D above Stage3D. This may be useful in most cases, but imagine a 2D UI control that should display details of a 3D unit. You'll start cutting holes into the 2D interface using masking, which makes the 2D UI even slower.
An incomplete list of some of the things that happen more or less on a daily basis.
- In Flash Pro, you'd better split a bigger UI into multiple SWCs and embed them so the artist can work without interrupting the coders. Randomly, while compiling the main project, you'll end up with "cannot convert foo to MovieClip" when the Flash runtime constructs the display object hierarchy. Just re-publish in Flash Pro and pray to get it running.
- The iOS frame rate drops heavily on every touch event because the AIR runtime searches for SimpleButton instances inside the whole display list. Adobe is aware of the issue but doesn't fix it.
- AIR deployment on iOS fails most of the time, sometimes with an error message, sometimes without. Just try again; sometimes it works. I switched to iFunBox (an app/file manager for iOS devices) to avoid getting annoyed too often.
- TextField.textWidth and TextField.textHeight may return wrong numbers if text fields use embedded fonts. This highly depends on the content.
- Adobe Scout (Adobe's realtime profiler, part of the Adobe Gaming SDK and Creative Cloud) tracks less than 20% of the actually used memory in a lot of cases. Additionally, "Other memory" and "Other bitmap data" do not really help identify memory issues at all.
- Some areas of the 2D UI are sometimes not redrawn. This doesn't happen in all browsers and varies between different Flash player versions and also between debug/release builds.
- If you fail to set up your native extensions properly, the iOS version will hang at its splash screen without any useful info about what the problem is.
Tips & Tricks
- Vector3D.length is slower than calculating the length manually, both on web and iOS; tested with AIR 4.0 / FP 12.
- Vector3D.copyFrom() doesn't copy the w component; remember this if you use Vector3D e.g. for colors.
- A static Vector3D used as a temporary vector for calculations can be a huge speed improvement.
- Throw away your BitmapData with dispose() as soon as you have uploaded your bitmap to a Texture, to prevent doubled memory usage.
- To get rid of 2D/3D sorting problems: on desktop, Context3D.renderToBitmapData is fast enough to even reach 60 fps, but do not try this on mobile devices.
- TheMiner is a good profiling tool that runs inside your Flash application.
- Rendering a DisplayObject to a texture is costly, but reduces work on the 2D renderer and is usually a good caching solution.
- Null out everything when you don't need it anymore to reduce garbage collection cycles. Even remove weak event listeners.
- Profiling hint: Adobe Scout displays memory used by Vector.<int> and Vector.<Number> as "Other", Vector.<Boolean> as "ActionScript objects", and Vector.<String> as both "Other" (about 1/3) and "ActionScript objects" (about 2/3).
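The static temporary vector trick from the list above can be sketched like this (plain JavaScript mirroring the AS3 pattern; the names are mine):

```javascript
// One module-level scratch object, reused by every call:
// no temporary vector is allocated, so the GC stays quiet.
var s_tmp = { x: 0, y: 0, z: 0 };

function distanceBetween(a, b) {
    // write the difference into the shared scratch object
    s_tmp.x = a.x - b.x;
    s_tmp.y = a.y - b.y;
    s_tmp.z = a.z - b.z;
    return Math.sqrt(s_tmp.x * s_tmp.x + s_tmp.y * s_tmp.y + s_tmp.z * s_tmp.z);
}
```

The obvious caveat: the shared scratch makes the function non-reentrant, which is fine in single-threaded ActionScript or JavaScript code but rules out calling it recursively while the scratch value is still needed.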