PocketCHIP

The CHIP is an excellent little computer. Production is having difficulties at present, but hopefully this will be sorted out soon. You can even get a keyboard and screen case for mobile use. In short it’s a competitor to the Raspberry Pi, with the benefits of not needing a WiFi dongle and having storage built in, so no SD card is needed. What hasn’t it got by default? HDMI, multiple USB ports, free Mathematica, easy SD card swapping, and it only has one OS choice. What extra does it have? Composite video support, an easy portable case, a built-in battery charge circuit, a lower price, Bluetooth and WiFi built in, and if bought with the pocket case and screen, some free game software thrown in. A more in-depth out-of-the-box review to follow.

Why my interest? Thin client possibilities, present and future. So far so good. Quite intuitive to use, and only 17% of the disk used (after an apt upgrade) with the default install. The keyboard is a little fiddly but everything is there. There should be some good options for building tools on this. WiFi connects smoothly, no problems. The home key and ALT+TAB are effective for window management, and the included home screen defaults are good fun. Notable omissions from the install are any immediate browsing or JS/HTML client, and nothing more than a simple text editor, no RTF. But considering the PocketCHIP form factor, this would be an ideal typing-on-the-go platform, where whipping out a laptop would be excessive and a tablet might do, but a Windows tablet would consume more power, and an Android tablet would not be quite as customizable. Not that all this can’t be sorted given enough skill with Linux, and given the device is designed for such hackery, it’s likely a plus.

For cloud deployment, the missing feature is some kind of file sync, something that would make corporate-grade application prototyping a breeze. I think config files have to become config directories with date-ordered priority, so the last applied wins. That could support some kind of key-code download architecture for situational setup. Of course this would have to be done at user-level privileges. Maybe a git branch tag or ID. So I think the first install is git, and then some way to monkey patch the menu system. It looks like gksu or sudo -A is going to have to be used to add a script, with the script installer then deleted, so as to prevent hijack of the script at a later date. This would make a wget over HTTPS piped into bash a one-line bootstrap.

Browsing the Web

I was fiddling about with all the usual suspects, but the team got it right: surf is the browser to go for. Just press help, and then CTRL+G to enter the matrix. Simplistic excellence. It’s even running reasonable JavaScript. Sorted. Rendering is excellent with good HTML, and complete crap with absolute-sized elements. Luckily a little surf config will reflow some bad sites.

SpliceArray in JavaScript

The current focus in the riotEmbed project is the SpliceArray class. The aim is a tree array structure, where each leaf does not have to be full. The cumulative counts of the leaves make for a simple binary search algorithm to find the correct leaf and element. Splicing involves removing and adding elements. This turns the expensive linear copy of almost all elements on a large array splice into just insertion and deletion of smaller arrays, and so makes the splice not dependent on the number of array elements. The depth of the array tree needed depends on the maximum size, and is a trade-off between differing efficiencies. An optimal size for each leaf node is about sixteen elements. This means the branch nodes have thirty-two elements, as both a link and a cumulative count are required. A tree depth of six allows for a 16.7 million element array maximum. This is more than sufficient for any task I can imagine being done by browser-side JS.
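As a rough illustration of the leaf lookup (a minimal sketch only, with my own node layout and names, not the actual riotEmbed code), finding the leaf for a global index is a walk down the cumulative counts:

// Minimal sketch of locating an element in a tree-of-leaves array.
// Each branch node holds child links and the cumulative element counts
// of those children; leaves are plain arrays that need not be full.
function locate(node, index) {
  while (node.children) {
    // binary search the cumulative counts for the child holding index
    var lo = 0, hi = node.counts.length - 1;
    while (lo < hi) {
      var mid = (lo + hi) >> 1;
      if (index < node.counts[mid]) hi = mid; else lo = mid + 1;
    }
    if (lo > 0) index -= node.counts[lo - 1]; // offset into the chosen child
    node = node.children[lo];
  }
  return { leaf: node, offset: index }; // the splice then edits just this leaf
}

A splice then only moves elements within one small leaf and adjusts the counts on the path back up, which is what removes the dependence on the total element count.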

Collections and Operator Overloading NOT in JS

Well, that was almost a disappointment for optimising ordered collection renders by using arrays. But I have an idea, and will keep you informed. You can check colls.js as it evolves. I won’t spoil the excitement. The class in the end should do almost everything an array can do, and more. Some features will be slightly altered as it’s a collection, and not just an object with an array prototype daddy. The square brackets used to index arrays are out. I’m sure I’ve got a good workaround. I seem to remember from an old Sun Microsystems book on JavaScript that text-indexed properties can do indexing in the array, but that was way back when “some random stuff is not an object” was a JS error message for just about everything.

One of these days I might make a mangler to output JS from a nicer, less coerced but more operator coerce-able language syntax. Although I have to say the way ECMAScript 6 is going, it’s a bit nuts, with pointless static and many other “features”. How about preventing JavaScript’s habit of just slinging un-var-ed variables into the global namespace when there is no corresponding var declaration in scope? The argument for is that it’s easy to throw a value to a test observer to debug; the argument against is that spelling errors in variable names become harder to find at parse time. If the code was in flight while failing, then knowing the code will indicate where the fault lies. For other people’s code a line number or search string would be better. Either works for me.

My favourite mind mess would be { .[“something”]; anything; } for dynamic tag based on the value of something in object expressions. Giggle.
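For comparison, ES6 computed property names already cover the value-keyed part of that, just without the dot:

var something = "volume";
var anything = 11;
var tagged = { [something]: anything }; // { volume: 11 }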

Spoiler Alert

Yes, I’ve decided to make the base array a set of ordered keys, based on an ordering set of key names, so that many operations can be optimised by binary divide and conquer. For transparent access, Proxy object support in the browser will be required. I may provide put and get methods for older situations, and also because that would allow for a multi-keyed index. The primary key is based on all supplied initial keys, their compare functions and the priority order [a, b, c, …], with automatic secondary keys of [b, c, …], [c, …], […], … for when there is need of them. I’ll start the rewrite soon. Of course the higher operations like split and splice won’t be available on the auto secondary keys, but in years of database design I’ve never needed one of such form.
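A minimal sketch of the transparent access idea follows, assuming Proxy support in the browser; the names and shape here are my own guesses and not the eventual colls.js API:

// Sketch: a collection kept ordered on a compare function, with numeric
// indexing made transparent through a Proxy. Illustrative names only.
function makeCollection(compare) {
  var items = []; // the hidden ordered backing array
  var target = {
    put: function (obj) { // binary insert keeps the divide-and-conquer order
      var lo = 0, hi = items.length;
      while (lo < hi) {
        var mid = (lo + hi) >> 1;
        if (compare(items[mid], obj) < 0) lo = mid + 1; else hi = mid;
      }
      items.splice(lo, 0, obj);
    },
    get: function (i) { return items[i]; },
    size: function () { return items.length; }
  };
  return new Proxy(target, {
    get: function (t, prop) {
      if (typeof prop === "string" && /^\d+$/.test(prop)) return items[prop];
      return t[prop];
    }
  });
}

var byName = makeCollection(function (a, b) { return a.name < b.name ? -1 : 1; });
byName.put({ name: "zebra" });
byName.put({ name: "aardvark" });
// byName[0].name === "aardvark"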

The filt.js script will then extend utility by allowing any of the auto secondaries to be treated as a primary on the filter view, with specification of an equals, or a min and max range. All will share the common hidden array of objects in a particular collection for space efficiency. This should make a medium-fast local database structure possible, with reasonable scaling. Today and tomorrow though will be spent on a meeting, and on an effective partitioning strategy to avoid “full table pull” requests to the server.
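The range part of the filter view could look something like this sketch (again my own names, assuming the ordered backing array from above):

// Sketch: lower and upper bounds on an ordered key by binary search,
// giving an inclusive [min, max] slice without a linear scan.
function rangeView(sorted, keyOf, min, max) {
  function bound(value, upper) {
    var lo = 0, hi = sorted.length;
    while (lo < hi) {
      var mid = (lo + hi) >> 1;
      var k = keyOf(sorted[mid]);
      if (k < value || (upper && k === value)) lo = mid + 1; else hi = mid;
    }
    return lo;
  }
  return sorted.slice(bound(min, false), bound(max, true));
}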

Further Improvements

The JSON encoded collation order was chosen to prevent bad comparisons between objects with silly string representations. It might be extended, such that a generic text search and object key ordering are given some possibility. This is perhaps another use the compression can be put to, as the BWT in the __ module has good search characteristics. Something to think over. It looks as though the code would be slow given heavy use of splice. This does suggest an optimization by making another Array subclass named SpliceArray, which uses an n-tree with sub-element and leaf counts and a cumulative tally, for O(1) splice performance.

WordPress. What’s it like?

I’ve been using WordPress for a while now, and it’s OK. The admin interface can take a little getting used to, especially when plugins throw menus all over the place, but the online help is very good. The main issues recently were with configuring sendmail so that WordPress could send emails. Upgrading can also be a bit of a pain. The main issue at the moment is building a dynamic web app. Editing the WordPress PHP is a no, on the grounds that WordPress source updates overwrite changes. Creating a WordPress plugin is also an option, but was not chosen as I want client rendering, not server rendering, in the app, to keep the server loading low. I see WordPress as a convenience, not a perfection. So the decision was taken to use client-side JavaScript rendering, and have one single PHP script (in the WordPress root folder) which supplies JSON from extra tables created in the WordPress database. An eventual WordPress plugin may be possible, to install this script and a few JavaScript files to bootstrap the client engine, but not at this time.

This way of working also means the code can be WordPress-transparent, usable for other site types, and an easier one-script conversion to node.js, for example. With an install base of over 70 million and an easy templating CMS, WordPress is a good, pragmatic choice for this site so far. The other main decisions then all related to the client JavaScript stack. I decided to go for riot as the templating engine as it is lightweight and keeps things modular. Some say ReactJS is good, and it does look it, but riot.js looks just as good, is smaller as an include (have you seen the page source of a WordPress page?) and does client-side rendering easily. Then there is either underscore.js or lodash.js (I picked underscore) as a basic utility library. Next up is the AJAX layer. While WordPress does include jQuery, I wanted independence from WordPress tie-ins, which ruled out the otherwise OK backbone.js; a fully custom layer also allows me to experiment with bandwidth reduction using data compression as a research opportunity. So I have laid out a collections architecture for myself.

Connecting riot.js to this custom layer should be fairly easy. The only other issue was then a matter of style sheet processing to enhance consistency of style. The excellent less.js was chosen for this. Client-side rendering of the styles was also chosen, which is sub-optimal (cached, but uses time), but it allows the later possibility of meta-manipulation of, say, colour values as sets for CSS design compression, and keeps freedom from tie-in to a particular back end solution (a single PHP script at present). So that becomes the stack in its entirety, minus the application. Well, I can’t write everything at once, and the end-user experience means the application form must be finalized last, as possibilities only stay open this side of implementation. For the record I also consider the collections layer a research opportunity. I’ve seen a good few technologies fail to be anything but good demos, and some which should have remained demos but had good funding. Ye olde time to market, and sell a better one later for double the sales. Why not buy one quality one later?

The Inquiry Forms Emailing Now Works

Just set up sendmail on the server using these instructions. Now it’s possible to send inquiries. There will also be other forms as and when relevant. The configuration had an extra step of using a cloud file which behaves as a master for sendmail.mc, but this was no big deal. It all took less than ten minutes. A minor complexity now is to forward the response email to somewhere useful, to tie up that loose end later.

RiotEmbed.js Coding Going Well

I’ve done more on riotEmbed today. I developed a system for hash-code checking of any scripts dynamically loaded. This should stop injection of just any code. There is also some removal of semantic and syntax errors based on ‘this’ and ‘call’, and some confusion between JSON and JS that can contain roughly JSON, where ‘x: function()’ and ‘function x()’ are not quite the same.
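A rough sketch of the hash check idea, using the browser’s SubtleCrypto digest; the URL, hash and names here are placeholders and the riotEmbed internals may well differ:

// Sketch: fetch a script, check its SHA-256 against an expected hex digest,
// and only inject it if it matches. Placeholder names throughout.
function loadChecked(url, expectedHex) {
  return fetch(url).then(function (r) { return r.text(); }).then(function (source) {
    return crypto.subtle.digest("SHA-256", new TextEncoder().encode(source))
      .then(function (digest) {
        var hex = Array.prototype.map.call(new Uint8Array(digest), function (b) {
          return ("0" + b.toString(16)).slice(-2);
        }).join("");
        if (hex !== expectedHex) throw new Error("script hash mismatch: " + url);
        var tag = document.createElement("script");
        tag.text = source; // inline, so what runs is exactly what was hashed
        document.head.appendChild(tag);
      });
  });
}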

I’ll do some planning of the DB schema tomorrow …

The following link is to a QUnit testing file I set up, which does no real tests yet, but is good for browser code testing, and much easier than the convoluted Travis CI virtual machine excess.

QUnit Testing

I’ve added a dictionary acceleration method to the LZW, and called it PON (Packed Object Notation), which is only really effective when used after a BWT, as in the pack method. Also some Unicode compression was added, which users of local 64-character sets will like. This leaves a final point in the compression layer: UCS-2 to UTF-8 conversion at the XMLHttpRequest boundary. By default this uses a text interface, and so the 16-bit characters native to JavaScript strings are UTF-8 encoded and decoded at the eventual net octet streaming. As the PON is expected to be large (when compression is really needed) compared to any other uncompressed JSON, there is an argument to serialize for high dictionary codes (a doubling of uncompressed JSON size, and almost a third of PON size), or post UTF to apply SUTF coding (cutting one third off compressed PON without affecting uncompressed JSON). The disadvantage of this is on the server side. The PON is the same, but the uncompressed JSON part will need an encoding and decoding function pair, and hence consume computational and memory buffer resources on the server.
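The relevant byte cost can be seen directly: a high code point used as a dictionary code takes three UTF-8 bytes on the wire, against one byte for plain ASCII and two bytes of UCS-2 inside the JavaScript string, which is where the one-third figure for dropping a byte comes from. For instance:

// A code point above U+07FF takes three bytes of UTF-8 on the wire.
var enc = new TextEncoder();
console.log(enc.encode(String.fromCharCode(0x2801)).length); // 3 bytes on the wire
console.log(enc.encode("a").length);                         // 1 byte for ASCII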

As the aim of this compression layer is to take load off the server, this is something to think about. The PON itself will not need that encoding taken off or put on. As much of the uncompressed JSON part will be for SQL where clauses, and as indexes for arrays, the server can be considered ignorant of SUTF, as long as all literals used in the PHP script are ASCII. This is likely for all JSON keys and literal values. So the second option, on the client side, of SUTF of the UTF would be effective. Some would say put gzip on the server, but that would be more server load, which is to be avoided for scaling. I wonder if SUTF was written to accept use of the Uint8Array type?

Some hindsight analysis shows the one-third gain is not likely to be realized except with highly redundant data. More realistic data has a wider than 64 dictionary code spread, and the middle byte of a UTF-8 sequence is the easiest one to drop on repeats. The first byte contains the length indication, and as such the code would become much more complex to drop the high page bits, by juggling the lower four bits (0 to 3) in byte two into the lower four bits of byte one. Possibly a self-inverse function … Implemented (no testing yet). The exact nature of JSON input to the pack function is next on the list client side, with the corresponding server-side requirements of maintaining a searchable store, and distribution replication consistency.

The code spread is now 1024 symbols (maintaining easy decode and ASCII preservation), as anything larger would affect bits four and five in byte two and change the one-third saving on three-byte UTF-8 code points. There are 2048 dictionary codes before this compression is even used, and so it only applies to larger inputs. As the dictionary codes are slightly super-linear, I did have an idea to normalize them by subtracting a linear growth over time, and then “invert the negative bulges” where lower and hence shorter dictionary codes were abnormal to the code growth trend. This is not applicable though, as not enough information is easily available in a compressed stream to recover the coding. Well, at least it gives gzip something to have a crack at, for those who want to burden the server.

Putting riot.js and less.js on WordPress

You’ll need the Insert Headers and Footers plugin, and then put the following in the footer. As the mount is done before the page bottom, there may be problems with some aspects. In any pages or posts it then becomes possible to incorporate your own tag objects, and refer to them in a type definition. As the serving of such “types” has to be free of certain HTML concerns, it’s likely a good idea to set up a GitHub repo to store your types. CSS via less is in the compiler.

<script src="https://rawgit.com/jackokring/riot-embed/master/less.min.js"></script>
<script src="https://rawgit.com/jackokring/riot-embed/master/riot%2Bcompiler.min.js"></script>
<script src="https://rawgit.com/jackokring/riot-embed/master/underscore.min.js"></script>
<script>
  riot.mount('*');
</script>

Below should appear a timer … The visual editor can make your custom tags disappear.

Produced by the following …

<timer></timer>
<script src="https://rawgit.com/jackokring/riot-embed/master/timer.tag" type="riot/tag"></script>

Or with the shortcoder plugin this becomes … in any post …

[sc name="riot" tag="timer"]

And … once in the shortcoder editor

<%%tag%%></%%tag%%>
<script src="https://rawgit.com/jackokring/riot-embed/master/%%tag%%.tag" type="riot/tag"></script>

This is then the decided dynamic content system, and the back end and script services are being developed on the following GitHub repository. The project scope is described there. Client side where possible, and server side for database replication consistency. The tunnel to the database will be the only PHP, and all page returns for AJAX-style stuff will be as JavaScript returns, so that no static JSON is sent.

GitHub Repo for Ideas

Next up storing PON …

Ethereum: A few days in

Monero works a treat, network sync OK. Ethereum on the other hand loses network sync, and easily lags thousands of blocks. This does prevent users entering the block lotto in return for storing the blockchain, and such an interesting contract execution technology is failing from my perspective.

It is a beta test, but as all potential developers know, the test net, which is not the live ETH, may be interesting, but it is not the eventual situation. The live ETH is where any development lives, and so is the one to have sorted before any time investment in the protocol is voided.

Proof of Burn

Consider the proof of burn algorithm as an effective cryptocoin algorithm. Then turn this on its head. If a pseudo-random burn density is exhibited, previously mined coins can be used as the burn sink. If this process is also considered a method of mining success, then the burn creates an associated make in the protocol. As this can be a fixed multiplier, it maintains a mint standard. An “I could have burnt block, proof of could have” as it were. And get coins for doing it.

Something like Slimcoin with a kick-in, kick-out coins balancer keeping the fixed-ish multiplier net around a centroid, and having “modulable” kicks within limits. An “alternative slicer” can split this transact delta into a “modulable” stream for storing data. So if a method of “moving root block” can maintain a maximal blockchain size, data compression and a fixed-bound cryptocoin with some injected coin total supply “chaos” result.

QED.

Of course the minimalist right to sign and signing would work in effect, but would encourage opening many intermediate wallets. Burning these intermediate wallets also provides the final coins for the real wallet. This in essence is proof of work. The next step of proof of stake would be to put any chosen coin into the mix for the hash, to gain the right to sign, as this would have the effect of maintaining faster hashing with more coins to try, while the block number and chain check hash would prevent rehashing doubles. Or as some have it, selection of the coin for the right of hash, à la premium bonds. Publishing private key hashes of intermediate wallets such that the coin can be “seen” as mined and owned looks promising. And maybe a right-to-sign transfer dole for the under-coined, although this would negate its use as a bit exchange.

That all explains why some of the best are hybrid in the mechanistic building of the blockchain. Proof of burn allows for coin supply modulation, and on fixed-supply coins a possible under-representation over time. The idea that calculating something useful would be good is good at heart, but misses the easy re-seeding duplication necessity, as the useful result is an open plain text. In many ways Ethereum will excel at derivatives …

The whole crypto-currency market is full of clones with different marketing, and some have a slight tech advantage. The main differences coming up are memory-intensive hashes, and even some blockchain-as-random-data hashes. Proof of stake looks good in principle, but it has a non-mining aspect, and so is locked out for new users wanting to mine without an initial stake, and in some sense why would they mine? In such a situation it’s not mining but interest payments for service, which devalues the market without effort, which is sort of not getting paid without equitable burn.

Proof of work may be environmentally inefficient, but better that the box does it.

An Investigative Journey into Bitcoin

I am currently applying for a MasterCard from Wirex to investigate the possibility of taking payment in Bitcoin. I will let you know how this goes, as and when. Well someone has to do it. At present the Android app does not work with my device, even though the Android version is 4.4, and the app needs 4.2 and up. Strange.

Next I downloaded Ethereum to check out distributed processing ideas on the blockchain. An interesting idea, but quite a lot of documentation to look over. Bitcoin wallets are easy to find, even in mobile form, but Ethereum has the edge of a programmatic layer. I think the solution to the Ethereum block fork is a 1000 ETH total haircut, split proportionally and dumped into the account. Although clever, the detach code was not worth the “value gained”, and resulted in an experimental ETC net with lower volume.

The blockchain of Bitcoin is quite big, and there are some interesting competing technologies. I’m also checking out Monero as an alternative, and there are some interesting projects in alpha test. I mainly chose the ones I did for the Bitcoin standard (an Android wallet), and because of high trading volume or useful extra technology layers.

So I’m up and micro-mining ETH and XMR (Monero) and will let you know. It’s not easy to set up, but with enough technical knowledge you can micro-mine on a laptop. Some algorithms are more suited to this than others. Of note, be aware I’m not mining Bitcoin. That would be a fool’s game without custom SHA-256 ASICs. Other coins to consider are LTC, which is easier than BTC for micro-mining, and some of the other highly traded coins which do not use SHA-256, although it does look like the LTC hash will place the mining in large corporate ASIC hands, as happened for BTC. LTC has a good Android wallet very similar to an excellent BTC wallet.

Something like C:\Windows\System32\cmd.exe /K C:\Users\user\Desktop\installed\Ethereum-Wallet-win64-0-8-2\resources\node\geth\geth.exe --mine could prove useful if you want an ETH miner launch from Windows desktop. You can’t run the wallet at the same time though. You may also need switches if you use GPU, and to alter the number of processor cores used.

MyMonero is an excellent idea, but getting Monero might be the most difficult part. It is effectively a most useful tool. This completes my investigation into cryptocoins. I hope you find some of the tools useful.

Models, Views and Controllers

Had an interesting meetup about Android Wear. All good, a nice introduction to the new Complications API, and general revision on optimization for power and other resources. It did get me thinking about MVC design patterns, and how I’d love the implementation to be. Model instantiate, model update, a model save callable, a view renderer, and a control list builder from method parameter signature. Very minimalist.

Initial thoughts indicate that the ordering of the controller events should be done based on the order of method occurrence in the class file, for say Java. Also supplying a store state to the instantiate, and as a parameter to the store callback, would be good. Like a Bundle. The renderer just needs a graphics context, and the controller callbacks need a unique method name (with some printed name mangling), and a boolean parameter to either do, or request doable for a grey-out, returning a boolean done or doable status.

Synchronization on a thread can keep mutual criticals from interfering, and a utility set of methods such as useIcon on the doable control query path could set up the right icon within a list with icons. This is the simplest MVC factorization I can envisage; there are always more complex ones, although simpler ones would be hard. For post-processing after loading, or pre-processing before saving, there would be no essential requirement to sub-factor, although overrides of methods in a higher level object MVC may benefit from such.

Method name mangling should really be combined with internationalization, which makes for at least one existential method name, printed form and language triad. This sort of thing should not be XML, it should be a database table. Even the alternate doable test path could be auto-generated via extra fields in this table and some extra generation code, to keep the “code” within the “list field” of the editor. I’ve never seen a text editor backed by a database. Perhaps it could help in a “pivot table/form” way.

Just to say, I do like languages with a ‘ (quote) or “setq” or “Hold” method of supplying un-evaluated closures as function or method arguments. JavaScript has anonymous functions. Yep, okie dokie.
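A quick sketch of what that means in practice, the anonymous function standing in for the quoted, un-evaluated argument:

// The function body only runs if and when the callee decides to call it.
function maybe(doIt, deferred) { return doIt ? deferred() : undefined; }
maybe(false, function () { throw new Error("never evaluated"); }); // no throw
maybe(true, function () { return 6 * 7; });                        // 42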

Joined the Giffgaff Affiliate Program

K Ring Technologies Ltd. can benefit from you switching your mobile telecommunications provider. I, the director, no longer want to use my current provider, and have seen that Giffgaff offer controls to prevent premium rate service scams (an option in settings), and offer other good data bundles. Simply order a SIM via the top right link on the page.

After using up the 6 GB 4G full-speed allowance, the slower speed is sufficient for audio streaming, but video streaming depends on the quality settings, and often needs pausing to fill the buffer. This is fine for most use, and the connection goes full 4G after midnight until around 7 o’clock, which is good for unsupervised downloads. All in all, it’s looking like an excellent service compared to Three, although slightly slower, and it has a much better internet interface, avoiding the extremely bad Three customer service. I mean, how many times does one have to ask? The web interface at Giffgaff allows all premium rate to be disabled via settings, to avoid call-back scams and such. The community forum is also good.

Now to obtain a PAC code from Three. Wish me luck.

I’ve been using it for some days now, and the first problem is a text asking me to kill the always-on data. It turns out that they do not perform any dynamic rate limiting to keep you within the 256 kb/s limit after your 6 GB has been used. This is strange, and maybe relates to them subcontracting a carrier network. I’m testing out a Windows program, NetLimiter, to see how that goes.

NetLimiter has a nice feature to schedule rules on bandwidth limiting at the peak hours of the Giffgaff account, so the limit engages from a minute before to a minute after. It also has some nice firewall features. It was on beta test offer for registered users, and so for me it was free.

Web of Things

I went to an interesting meetup yesterday about WoT, with guest speakers from the W3C and Jeo. It was all about the efforts to standardize a representation of capability and interop of various IoT devices, to bring them an off-the-shelf ease of integration. Very good it was. Anybody want to buy a used car? Just a little humour there.

My ideas involved specifying an extension of SI units, and perhaps having just one reserved JSON key for placing all the automation behaviour under. Various extra things would have to be in the units, such as point and range specifiers, and maybe the dimensional count along with the units of a dimension. For more algorithmic actions, JavaScript functions could be specified, using a limited subset of definitions, and maybe a way of putting in an active low-level comment, perhaps using the JS reserved word native, due to the common reserved word basis between Java and JS.

Quite a few hidden units would be needed, with characters similar to mol, and likely u for micro, maybe even getting each unit down to just one letter. Just how much would be possible with 26 units? Things like space and time resolution (or the more operative protocol rather than base quantities) could use the JSON key of such a unit, and the value could be expressed in the providing units. There could, for example, be two virtual units which express a measure of input and output, so say 2 rad per second output could obviously be considered a motor speed controller. The $ key could express the primary capability, and say the “per second output range” specifier in the sub-object could specify the range of rad per second output per second, while the “rad per second output range” specifier could key the speed range in rad per second.
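Purely as a sketch of the shape, with every key here an invented placeholder rather than any agreed vocabulary, a device description might read:

// Hypothetical motor speed controller description: "$" names the primary
// capability, and the unit-named keys carry ranges in the providing units.
var motorController = {
  "$": "rad per second output",
  "rad per second output range": { "min": 0, "max": 104.7 },
  "per second output range": { "min": 0.5, "max": 20 }
};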

It’s all vague at present, but that’s the meetup purpose. Nice time guys, and gals.

Latest CODEC Source GPL v2+

The latest compression CODEC source. Issued GPL v2 or greater. The context can be extended beyond 4 bits, easily to 8 bits if you have enough memory and data, and a sub-context can be made by nesting another BWT within each context block, for a massive 16 bit context, and a spectacular 28 bit dictionary of 268,435,456 entries. The skip code on the count table assists in data reduction, to make excellent use of such a large dictionary possible.

The minor 4 bits per symbol implicit context has maximum utility on small dictionary entries, but the extra 16 times the number of entries allows larger entries in the same coding space. With a full 16 bit context enabled, the coding would allow over 50% dictionary symbol compression, and a much larger set of dictionary entries. The skip coding on large data sets is expected to be less than a 3% loss. With only a 4 bit context, a 25% symbol gain is expected.

On English text at about 2.1 bits per letter, almost 2 extra letters per symbol is an expected coding. So with a 12 bit index, a 25% gain is expected, plus a little for using BWT context, but a minor loss likely writes this off. The estimate then is close to optimal.

Further investigation into an auto-built dictionary based on letter group statistics, and generation of the entry-to-value mapping algorithmically, may be an effective method of reducing the space requirements of the dictionary.

A Server Side Java Jetty Persistent Socket API is in Development

I looked into various available solutions, but for full back end customization I have decided on a persistent socket layer of my own design. The Firebase FCM module supplies the URL push for pull connections (Android client side), and an excellent SA-IS library class under MIT licence is used to provide FilterStream compression (BWT with contextual LZW). The whole thing is Externalizable, by design, and so far looking better than any solution available for free. Today is to put in more thought on the scalability side of things, as this will be difficult to rectify later.

Finding out how to make a JavaEE/Jetty Servlet project extension in Android Studio was useful, and I’d suggest embedded Jetty to anyone; the Servlet API is quite a tiny part of the full Jetty download. It looks like the back end becomes a three-Servlet site, plus some background tasks built on the persistent streams. Maybe some extension later, but so far even customer details don’t need to be stored.

The top level JSONAsync object supports keepUpdated() and clone() with updateTo(JSONObject) for backgrounded, two-directional compressed sync that is off-air and IP-number independent. The clone() returns a new JSONObject, so allowing edits before updateTo(). The main method of detecting problems is errors in decoding the compressed stream. The code detects this, and requests a flush to reinitialize the compression dictionary. This capture of IOException, with Thread looping and yield(), provides for re-establishment of the connection.

The method updateTo() is rate regulated, and may not succeed in performing an immediate update. The local copy is updated, and any remote updates can be joined with further updateTo() calls. A default thread will attempt a 30 second synchronization tick, if there is not one in progress. The server also checks for making things available to the client every 30 seconds, but this will not trigger a reset.

The method keepUpdated() is automatically rate regulated. The refresh interval holds off starting new refreshes until the current refresh is completed or failed. Refreshing is attempted as fast as necessary, but if errors start occurring, the requests to the server are slowed down.

The method trimSync() removes non-active channels in any context where a certainty of connectivity is known. This is to prevent memory leaks. The automatic launching of a ClientProcessor when a new client FCM idToken is received looks nice, with restoration of the socket layer killing ones which are not unique. The control flow can be activated, and code in the flow must be written such that no race condition exists, such as performing two writes. A process boot lock until the first control flow activator provides sufficient guard against this, given otherwise sequential dependency of and on a set of JSONAsync objects.

The Build

Java development is under way, with some general extension of Kodek and an outline of some classes to make a generalized Map structure. The corporate bank account is almost open, and things in general are slow but well. There are new product development ideas, and some surprising service options not directly in computing.

The director’s birthday went off without too many problems. Much beer was consumed, and the UK weather decided to come play summer. Lovely. If you have any paying contracts in computer software, do inquire. Guaranteed are in-depth analysis, technical appraisal, scientific understanding, a choice between fast done and well structured code, and above all humour, unless paid (in advance) otherwise not to let it flow from the fountain of chuckle.