An Investigative Journey into Bitcoin

I am currently applying for a MasterCard from Wirex to investigate the possibility of taking payment in Bitcoin. I will let you know how this goes, as and when. Well, someone has to do it. At present the Android app does not work with my device, even though my Android version is 4.4 and the app requires 4.2 and up. Strange.

Next I downloaded Ethereum to check out distributed processing ideas on the blockchain. An interesting idea, but quite a lot of documentation to look over. Bitcoin wallets are easy to find, even in mobile form, but Ethereum has the edge of a programmatic layer. I think the solution to the Ethereum block fork is a haircut proportionally split across the 1000 ETH total, dumped back into the account. Although clever, the detach code was not worth the “value gained”, and resulted in an experimental ETC net with lower volume.

The Bitcoin blockchain is quite big, and there are some interesting competing technologies. I’m also checking out Monero as an alternative, and there are some interesting projects in alpha test. I mainly chose the coins I did because of the Bitcoin standard (an Android wallet), high trading volume, or useful extra technology layers.

So I’m up and micro mining ETH and XMR (Monero), and will let you know. It’s not easy to set up, but with enough technical knowledge you can micro mine on a laptop. Some algorithms are more suited to this than others. Note that I’m not mining Bitcoin: that would be a fool’s game without custom SHA-256 ASICs. Other coins to consider are LTC, which is easier than BTC for micro mining, and some of the other highly traded coins which do not use SHA-256, although it does look like the LTC hash will place the mining in large corporate ASIC hands, as happened for BTC. LTC has a good Android wallet very similar to an excellent BTC wallet.

Something like C:\Windows\System32\cmd.exe /K C:\Users\user\Desktop\installed\Ethereum-Wallet-win64-0-8-2\resources\node\geth\geth.exe --mine could prove useful if you want to launch an ETH miner from the Windows desktop. You can’t run the wallet at the same time though. You may also need extra switches if you use a GPU, or to alter the number of processor cores used.
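Saved as a batch file, the launch looks something like the sketch below. The --minerthreads flag for limiting CPU cores is an assumption here: the exact flag names vary between geth versions (newer releases renamed several miner options), so check geth --help for your build.

```shell
REM launch-eth-miner.bat -- a sketch; paths are from my machine, flags vary by geth version
REM --minerthreads is assumed here as the core-count switch on older geth builds
C:\Windows\System32\cmd.exe /K ^
  C:\Users\user\Desktop\installed\Ethereum-Wallet-win64-0-8-2\resources\node\geth\geth.exe ^
  --mine --minerthreads 2
```

Remember the wallet cannot run at the same time as this, since both want the same node data directory.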

MyMonero is an excellent idea, but getting Monero might be the most difficult part. It is a most useful tool. This completes my investigation into cryptocoins. I hope you find some of these tools useful.

Web of Things

I went to an interesting meetup yesterday about WoT, with guest speakers from the W3C and Jeo. It was all about the efforts to standardize a representation of the capabilities and interop of various IoT devices, to bring them off-the-shelf ease of integration. Very good it was. Anybody want to buy a used car? Just a little humour there.

My ideas involved specifying an extension of SI units, and perhaps having just one reserved JSON key under which all the automation behaviour is placed. Various extra things would have to be in the units, such as point and range specifiers, and maybe the dimensional count along with the units of each dimension. For more algorithmic actions, JavaScript functions could be specified using a limited subset of definitions, and maybe a way of putting in an active low-level comment, perhaps using the JS reserved word native, given the common reserved-word basis between Java and JS.

Quite a few hidden units would be needed, with a character similar to mol, likely u for micro, and maybe even getting each unit down to just one letter. Just how much would be possible with 26 units? Things like space and time resolution (or the more operative protocol quantities, not the base quantities) could use the JSON key of such a unit, with the value expressed in the providing units. There could, for example, be two virtual units expressing measures of input and output, so that, say, 2 rad per second output could obviously be considered a motor speed controller. The $ key could express the primary capability, the “per second output range” specifier in the sub-object could give the range of rad per second output per second, and the “rad per second output range” specifier could key the speed range in rad per second.
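As a concrete sketch of the idea, a motor speed controller might advertise itself as below. All key names and numbers here are hypothetical illustrations of the scheme described above, not part of any standard:

```json
{
  "$": {
    "rad per second output range": [0.0, 52.4],
    "per second output range": [-5.0, 5.0]
  }
}
```

The $ key carries the primary capability; the first specifier keys the speed range in rad per second, and the second gives the rate at which that output can change (effectively an acceleration limit).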

It’s all vague at present, but that’s the meetup purpose. Nice time guys, and gals.

Latest CODEC Source GPL v2+

The latest compression CODEC source, issued under GPL v2 or greater. The context can easily be extended beyond 4 bits to 8 bits if you have enough memory and data, and a sub-context can be made by nesting another BWT within each context block, for a massive 16 bit context and a spectacular 28 bit dictionary of 268,435,456 entries. The skip code on the count table assists in data reduction, making excellent use of such a large dictionary possible.

The minor 4 bits per symbol of implicit context has maximum utility on small dictionary entries, but the extra 16 times the number of entries allows larger entries in the same coding space. With a full 16 bit context enabled, the coding would allow over 50% dictionary symbol compression and a much larger set of dictionary entries. The skip coding on large data sets is expected to cost less than a 3% loss. With only a 4 bit context, a 25% symbol gain is expected.

On English text at about 2.1 bits per letter, a coding of almost 2 extra letters per symbol is expected. So with a 12 bit index, a 25% gain is expected, plus a little for using the BWT context, though a minor loss likely writes this off. The estimate is then close to optimal.

Further investigation into an auto-built dictionary based on letter group statistics, and algorithmic generation of the entry-to-value mapping, may be an effective method of reducing the space requirements of the dictionary.
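The sizing arithmetic above can be checked in a few lines. This is just a sketch of the numbers, with the 2.1 bits-per-letter figure taken as the rough entropy estimate used above:

```java
public class CodecSizing {
    public static void main(String[] args) {
        int contextBits = 16;       // 4 bit context extended to 8, doubled by the nested BWT
        int indexBits = 12;         // dictionary index width within each context
        long entries = 1L << (contextBits + indexBits);
        System.out.println(entries); // 268435456 -- the 28 bit dictionary above

        // Break-even point: a dictionary entry must cover more letters than
        // this for a 12 bit index to beat plain ~2.1 bits-per-letter coding.
        double bitsPerLetter = 2.1;
        System.out.printf("%.1f%n", indexBits / bitsPerLetter); // 5.7 letters
    }
}
```

So an entry averaging almost 2 letters beyond that break-even is where the expected gain comes from.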

A Server Side Java Jetty Persistent Socket API is in Development

I looked into various available solutions, but for full back end customization I have decided on a persistent socket layer of my own design. The Firebase FCM module supplies the URL push for pull connections (Android client side), and an excellent SA-IS library class under the MIT licence is used to provide FilterStream compression (BWT with contextual LZW). The whole thing is Externalizable by design, and so far looks better than any solution available for free. Today is for putting more thought into the scalability side of things, as this will be difficult to rectify later.

Finding out how to make a JavaEE/Jetty Servlet project extension in Android Studio was useful, and I’d suggest embedded Jetty to anyone; the Servlet API is quite a tiny part of the full Jetty download. It looks like the back end becomes a three-Servlet site, plus some background tasks built on the persistent streams. Maybe some extension later, but so far even customer details don’t need to be stored.

The top level JSONAsync object supports keepUpdated() and clone() with updateTo(JSONObject) for backgrounded, bidirectional compressed sync with off-air and IP-number-independent functionality. The clone() returns a new JSONObject, allowing edits before updateTo(). The main method of detecting problems is errors in decoding the compressed stream. The code detects this and requests a flush to reinitialize the compression dictionary. This capture of IOException, with thread looping and yield(), provides for re-establishment of the connection.
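A minimal sketch of that recovery loop is below. The class and method names (decodeNextBlock, requestDictionaryFlush) are hypothetical stand-ins for illustration, not the actual implementation; the point is the shape of the IOException capture with yield():

```java
import java.io.IOException;

// Sketch: a decode error is taken to mean the compression dictionary is out
// of sync, so a flush is requested and the thread loops with yield().
public class RecoveryLoopSketch {
    static int failuresLeft = 1;        // simulate one corrupted block
    static boolean flushRequested = false;

    static void decodeNextBlock() throws IOException {
        if (failuresLeft-- > 0) throw new IOException("bad compressed block");
    }

    static void requestDictionaryFlush() { flushRequested = true; }

    public static void main(String[] args) {
        boolean decoded = false;
        while (!decoded) {
            try {
                decodeNextBlock();
                decoded = true;
            } catch (IOException e) {
                requestDictionaryFlush(); // ask the peer to reinitialize compression
                Thread.yield();           // back off before re-establishing the stream
            }
        }
        System.out.println(flushRequested); // true: one flush was needed
    }
}
```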

The method updateTo() is rate regulated, and may not succeed in performing an immediate update. The local copy is updated, and any remote updates can be joined with further updateTo() calls. A default thread will attempt a synchronization tick every 30 seconds, if there is not one already in progress. The server also checks every 30 seconds for anything to make available to the client, but this will not trigger a reset.

The method keepUpdated() is automatically rate regulated. The refresh interval holds off starting new refreshes until the current refresh has completed or failed. Refreshing is attempted as fast as necessary, but if errors start occurring, the requests to the server are slowed down.
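The hold-off logic described can be sketched like this. RefreshGate is a hypothetical name for illustration; the real keepUpdated() internals may differ, and the error-driven slowdown would grow intervalMillis:

```java
// Sketch of the hold-off: a new refresh starts only when no refresh is in
// progress and the interval since the last one finished has elapsed.
public class RefreshGate {
    private final long intervalMillis;
    private boolean inProgress = false;
    private long lastFinished = Long.MIN_VALUE / 2; // far past, so first start is allowed

    public RefreshGate(long intervalMillis) { this.intervalMillis = intervalMillis; }

    public synchronized boolean tryStart(long now) {
        if (inProgress || now - lastFinished < intervalMillis) return false;
        inProgress = true;
        return true;
    }

    public synchronized void finish(long now) {
        inProgress = false;
        lastFinished = now; // on errors, intervalMillis could be grown here to slow down
    }

    public static void main(String[] args) {
        RefreshGate gate = new RefreshGate(30_000);   // the 30 second tick
        System.out.println(gate.tryStart(0));         // true: first refresh starts
        System.out.println(gate.tryStart(10_000));    // false: still in progress
        gate.finish(10_000);
        System.out.println(gate.tryStart(20_000));    // false: only 10 s since finish
        System.out.println(gate.tryStart(40_001));    // true: interval elapsed
    }
}
```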

The method trimSync() removes non-active channels in any context where connectivity is known with certainty. This is to prevent memory leaks. The automatic launching of a ClientProcessor when a new client FCM idToken is received looks nice, with restoration of the socket layer killing any that are not unique. The control flow can be activated, and code in the flow must be written so that no race condition exists, such as performing two writes. A process boot lock, held until the first control flow activation, provides a sufficient guard against this, given the otherwise sequential dependency of and on a set of JSONAsync objects.