## K Ring CODEC Existential Proof

Find n such that (L(0)/L(1))^(2n+1) defines the number of bias elements for a certain bias exceeding 2:1. This is not the minimal number of bias elements but is a faster computation of a sufficient existential cardinal order. In fact, it’s erroneous. A more useful equation is

E = Sum[(1-p)(1-q)(2n-1) p^(n-1) q^(n-1) + (1-p)^2 (2n) q^n p^(n-1), {n, 1, Infinity}]
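As a quick numerical check, the series can be truncated and summed directly. This Java sketch (the class and method names are mine, purely for illustration) evaluates E for given p and q:

```java
public class BiasSeries {
    // Truncated sum of
    // E = sum_{n=1..inf} (1-p)(1-q)(2n-1) p^(n-1) q^(n-1) + (1-p)^2 (2n) q^n p^(n-1)
    static double expected(double p, double q, int terms) {
        double e = 0;
        for (int n = 1; n <= terms; n++) {
            e += (1 - p) * (1 - q) * (2 * n - 1)
                    * Math.pow(p, n - 1) * Math.pow(q, n - 1);
            e += (1 - p) * (1 - p) * (2 * n)
                    * Math.pow(q, n) * Math.pow(p, n - 1);
        }
        return e;
    }

    public static void main(String[] args) {
        // the geometric decay makes a few hundred terms plenty for p, q < 1
        System.out.println(expected(0.5, 0.5, 200));
    }
}
```

For p = q = 0.5 the sum converges to exactly 1, which makes a handy sanity anchor for the truncation.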

This shows an asymmetry in pq for even counts of containment between additions of entropic pseudo-randomness. So what if the direction were PQ-biased detection, with subsample control via horizontal and vertical position splitting? The bit quantity of clockwise parity XOR reflection count parity (CWRP) has an interesting binary sequence. Flipping the clockwise parity and the 12/6 o’clock location inverts the state for modulation.

Consider asymmetric baryogenesis: the process by which some bias arises between antimatter and matter despite an apparently identical mirror symmetry between them. There must be an existential mechanism, and within this mechanism a way of digitizing the process and finding the equivalents of matter and antimatter. Some way of utilizing a probabilistic asymmetry, along with an application of time to the statistic, so that apparent opposites can be made to present a difference on some time-presence count.

## ANSI 60 Keyboards? And Exception to the Rule?

More of an experiment in software completion. Jokes abound.

A keyboard keymap file for an ANSI 60 custom has just finished its software build. Testing will follow, given that cashflow prevents buying and building the hardware on the near time scale. Not bad for a day!

There’s a built hex file for a DZ60 on GitHub so you don’t have to build your own, with an MD5 checksum of 596beceaa446c1f1b55ee5e0a738f1c8 to verify against the hack complexity. EDIT: version 1.7.2F (Enigma Bool Final Release). Development is complete. Only bug and documentation fixes may be pending.

It all stems from design and data entry thinking, and from small observations, like the control keys being on the corners, close to the thumbs-to-chest posture of baby two-finger hackers, instead of Alt being close in for the parallel thumbs of the multi-finger secretariat.

The input before the output, the junction of the output to our input. It’s a four-layer main layout with an extra four layers for function shift. A surprising amount can be fit into such a small 60-key space.

The system allows intercepts of events going into the widget, yet the focus priority should be picking up the unprocessed outgoings. Of course, this implies the atom widget should be the input interceptor, reflecting the message for outer processing in a context. This implies that only widgets which have no children, or administered system-critical widgets, can processEventInflow, while all can processEventOutflow, so silly things have less chance of happening in the certain progress of process code.

Perhaps a method signature of super protected, such that it necessarily carries a throws ExistentialException or similar. Of course, the fact that RuntimeException extends Exception (removing a code compilation constraint) is a security flaw: it should only have allowed the adding of a constraint, by making Exception extend RuntimeException (as compile-time protection against an existential).

Then the OS can automatically reflect the unhandled event back up the event outflow queue, along with an extra event carrying a link to the child inflow (and an exposed list of its child widgets) to outflow. An OrphanCollector can then decide whether to still show the child widgets, with the opportunity of newEventInflow. All widgets could also be allowed to newEventOutflowForRebound, itself a super protected method with a necessary throws ExistentialException (to prevent injection of events from non-administered widgets).

An ExistentialException can never be caught in user code to remove the throws clause, and use of super try requires executive privilege, to prevent executive code from being loaded by the ClassLoader. It could run, but in a lower protection ring until elevated.
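A minimal Java sketch of the containment rule described above; the names (processEventInflow, ExistentialException) follow the text, but the shapes of the classes are my assumption, not a defined API:

```java
import java.util.ArrayList;
import java.util.List;

public class InflowDemo {
    static class ExistentialException extends Exception {
        ExistentialException(String why) { super(why); }
    }

    abstract static class Widget {
        final List<Widget> children = new ArrayList<>();

        // only leaf (or specially administered) widgets may accept inflow
        final void processEventInflow(Object event) throws ExistentialException {
            if (!children.isEmpty() && !administered())
                throw new ExistentialException("inflow denied: widget has children");
            onEvent(event);
        }

        // every widget may reflect unhandled events back up the outflow
        void processEventOutflow(Object event) { /* queue toward the parent context */ }

        boolean administered() { return false; }//system-critical override point

        abstract void onEvent(Object event);
    }

    public static void main(String[] args) throws Exception {
        Widget leaf = new Widget() { void onEvent(Object e) { System.out.println("leaf got " + e); } };
        leaf.processEventInflow("click");//fine: no children
        Widget panel = new Widget() { void onEvent(Object e) {} };
        panel.children.add(leaf);
        try {
            panel.processEventInflow("click");
        } catch (ExistentialException ex) {
            System.out.println("denied: " + ex.getMessage());
        }
    }
}
```

The final-method trick stands in for “super protected”: subclasses cannot remove the throws clause by overriding.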

## Time Series Prediction

Given any time series of historical data, the prediction of the future values in the sequence is a computational task which can increase in complexity depending on the dimensionality of the data. For simple scalar data a predictive model based on differentials and expected continuation is perhaps the easiest. The order to which the series can be analysed depends quite a lot on numerical precision.

The computational complexity can be limited by using the local past to limit the size of the finite difference triangle, with the highest-order assumption being zero or a Monte Carlo spread Gaussian. Other predictions based on convolution and correlation could also be considered.

When using a local difference triangle, the outgoing sample making way for the new sample in the sliding window can be used for a simple calculation of the error introduced by “forgetting” that information. This could in theory be used to control the window size, or the Monte Carlo variance. It is a measure related to the Markov model of a memory process, with the integration of high differentials multiple times giving more predictive deviation from that which will happen.
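The sliding-window difference triangle can be sketched in a few lines of Java; this predictNext assumes the highest-order difference is zero (the simplest of the closure assumptions mentioned above), and the names are mine:

```java
public class DiffTriangle {
    // Newton-forward extrapolation: build the finite difference triangle
    // over the window and assume the highest-order difference is zero.
    static double predictNext(double[] window) {
        double[] d = window.clone();
        int n = d.length;
        double next = 0;
        for (int order = 0; order < n; order++) {
            next += d[n - 1 - order];//trailing edge of this difference row
            for (int i = 0; i < n - 1 - order; i++)
                d[i] = d[i + 1] - d[i];//next row of the triangle
        }
        return next;
    }

    public static void main(String[] args) {
        // a quadratic series is predicted exactly: 1, 4, 9, 16 -> 25
        System.out.println(predictNext(new double[]{1, 4, 9, 16}));
    }
}
```

Numerical precision limits the usable order exactly as described: each difference row subtracts nearly equal values, shedding significant bits.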

This is obvious when seen in this light. The time sequence has within it an origin in differential equations, although of extreme complexity. This is why spectral convolution correlation works well. Expensive compute, but it works well. Other methods have a lower compute requirement, and this is why I’m focusing on other methods these past few days.

A modified Gaussian density approach might be promising: assume an amplitude categorization about a mean, so that the signal density (of the time series, in a DSP sense) can approximate “expected” statistics when mapped from the Gaussian onto the historical amplitude density, given that the motions (differentials) have various rates of motion themselves in order to express a density.

The most probable direction holds until the over-probable changes the likely direction or rates again. Ideas form from noticing things. Integration, for example, has a naive accumulation of residual error from how floating-point numbers are stored, and higher multiple integrals magnify this effect greatly. It would be better to construct an integral from the local data stream of a time series, and work out the required constant by addition of a known integral at a fixed point.
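A sketch of that re-anchoring idea in Java: integrate locally by trapezoid, then fix the constant of integration from one known value, so the result is not dragged by error accumulated from a distant origin. Names here are illustrative only:

```java
public class AnchoredIntegral {
    // trapezoidal running integral, with the constant of integration
    // fixed by a known value of the integral at one sample index
    static double[] integrate(double[] x, double dt, int anchorIdx, double anchorValue) {
        double[] y = new double[x.length];
        for (int i = 1; i < x.length; i++)
            y[i] = y[i - 1] + 0.5 * (x[i] + x[i - 1]) * dt;
        double c = anchorValue - y[anchorIdx];//the required constant
        for (int i = 0; i < y.length; i++)
            y[i] += c;
        return y;
    }

    public static void main(String[] args) {
        double[] y = integrate(new double[]{1, 1, 1, 1, 1}, 1.0, 0, 0.0);
        // the integral of a constant 1 is a ramp: 0, 1, 2, 3, 4
        for (double v : y) System.out.print(v + " ");
    }
}
```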

Sacrificing integral precision for the non-accumulation of residual power error is a desirable trade-off in many time series problems. The inspiration for the integral estimator came from this understanding. The next step in DSP, from my creative perspective, is a Gaussian compander to normalize high-passed (or regression-subtracted, normalized) data to match a variance- and mean-stabilized Gaussian amplitude.

Integration as a continued sum of Gaussians would, via the central limit theorem, tend toward a narrower variance, but the offset error and same-sign squared error (smaller in double integrals, but with no average cancellation) lead to things like energy amplification in numerical simulation of energy-conserving systems.

Today’s signal processing piece was sparseLaplace, for quickly finding, for some sigma and time, the integral going toward infinity. I wonder how the series of integrals behaves as a summation of increasing sections of the same time step, and how this can be accelerated as a series approximation to the Laplace integral.

The main issue is that it is calculated from the localized data, good and bad. The accuracy depends on the estimates of differentials and so on the number of localized terms. It is a more-dimensional “filter”, as it has an extra set of variables for the centre and length of the window of samples, as well as sigma. A few steps of time should be all that is required to get a series summation estimate. Even the error in the time-step approximation to the integral has a pattern, and may be used to make the estimate more accurate.
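I can only hint at the internals of sparseLaplace here, but the windowed Laplace sum it describes might look like the following Java sketch; the function name, the parameters, and the plain left-Riemann evaluation are all my assumptions:

```java
public class SparseLaplaceSketch {
    // crude windowed Laplace integral: sum of x[k] * exp(-sigma * t_k) * dt,
    // with the window's centre/length acting as the extra "filter" variables
    static double sparseLaplace(double[] x, double dt, double sigma, int start, int length) {
        double acc = 0;
        for (int k = start; k < start + length && k < x.length; k++) {
            double t = k * dt;
            acc += x[k] * Math.exp(-sigma * t) * dt;
        }
        return acc;
    }

    public static void main(String[] args) {
        double[] ones = new double[1000];
        java.util.Arrays.fill(ones, 1.0);
        // for f(t) = 1 over [0, 1) this approaches (1 - e^-sigma) / sigma
        System.out.println(sparseLaplace(ones, 0.001, 1.0, 0, 1000));
    }
}
```

The exponential decay is what lets the tail toward infinity be truncated cheaply; the time-step error pattern mentioned above would be the next refinement.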

## AI and HashMap Turing Machines

Considering that a remarkable abstract datatype or two is possible, and perhaps closely models the human sequential thought process, I wonder today what applications this will have when a suitable execution model, ISA and microarchitecture have been defined. The properties of controllable locality of storage and motion, along with read and write, branch on stimulus, and other yet-to-be-discovered machine operations, make for a container for a kind of universal Turing machine.
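As a toy illustration of the container idea (the names here are mine; no execution model is actually defined yet), a HashMap keyed by position gives a sparse, unbounded Turing tape with exactly the read, write and motion operations described:

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapTape {
    // a sparse, unbounded tape: the "controllable locality of storage"
    final Map<Long, Character> tape = new HashMap<>();
    long head = 0;

    char read() { return tape.getOrDefault(head, '_'); }//'_' as the blank symbol
    void write(char c) { tape.put(head, c); }
    void move(int delta) { head += delta; }//controllable motion

    public static void main(String[] args) {
        // a tiny run: write "ab", step back, then branch on the stimulus read
        HashMapTape t = new HashMapTape();
        t.write('a'); t.move(1); t.write('b'); t.move(-1);
        System.out.println(t.read() == 'a' ? "saw a" : "saw " + t.read());
    }
}
```

Only the visited cells occupy memory, which is the practical difference from an array-backed tape.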

Today is a good day for robot consciousness, although I wonder just how applicable the implementation model is for biological life all over the universe. Here’s a free paper on a condensed few months of abstract thought.

Computative Psychoanalysis

It’s not just about IT, but about thrashing through what the mind does, can be made to do, and did; it all leverages information, modeling and simulation growth for matched or greater ability.

Yes, it could all be made in neural nets, but given the tools available, why would you choose to stick with the complexity and lack of density of such a solution? A reasoning accelerator would be cool for my PC. How is this going to come about without much worktop workshop? If it were just the oil market I could affect, and how did it come to pass that I was introduced to the fall of oil, and for what other consequential thought sets and hence productions could I change?

One might call it wonder and design dressed in “accidental” reckless endangerment. For what should be a simple, obvious benefit to the world becomes embroiled in competition with the drive for profit, for the control of the “others”, making a non-happening which upsets vested interests.

Who’d have thought it from this little cul-de-sac of a planetary system. Not exactly galactic mainline. And the winner is not halting for a live mind.

## Amiga on Fire on Playstore

The latest thing to try: a Cloanto Amiga Forever OS 3.1 install to SD card in the Amazon Fire 7. Is it the way to get a low-power portable development system? Put an OS on an SD and save main memory? An efficient OS from the times of sub-20 MHz clocks and 50 MB hard drives.

Is it relevant in the PC age? Yes. All the source code in Pascal or C can be shuffled to PC, and I might even develop some binary prototype apps. Maybe a simple web engine would be a good thing to develop, what with the low-CSS bull, and AROS open development for x86 architecture becoming better at making for a good VM sandbox experience, with main browsing on a sub-flavour of bloat OS 2020. A browser, a router and an Amiga.

Uae4arm is the emulation app available from the Playstore. I’m looking forward to some Aminet greatness. Some mildly irritated coding in free Pascal with objects these days, and a full GCC build chain. Even a licenced set of games will shrink the Android entertainment bloat. A bargain rush for the technical. Don’t worry you ST users, it’s a chance to dream.

Lazarus lives. Or at least Borglaz the great is as it was. Don’t expect to be developing video realtime code or supercomputer forecasts. I hear there is even a Python. I wonder if there are some other nice things. GCC and a little GUI redo? It’s not about making replacements for Android apps, more a less-bloat but full-do OS with enough test and utility grunt to make things. I wonder how pas2js is doing. There is also AMOS 2.0 to turn AMOS source into nice web apps. It’s not as silly as it seems.

Retro minimalism is more power in the hands of code designers. A bit of flange and boilerplate later and it’s a consumer product option with some character.

So it needs about a 100 MB hard disk file located not on the SD as it needs write access, and some changes of disk later and a boot of a clean install is done. Add the downloads folder as a disk and alter the mouse speed for the plugged in OTG keyboard. Excellent. I’ve got more space and speed than I did in the early 90s and 128 MB of Zorro RAM. Still an AGA A1200 but with a 68040 on its fastest setting.

I’ve a plan to install free Pascal and GCC along with some other tools to take the ultra portable Amiga on the move. The night light on the little keyboard will be good for midnight use. Having a media player in the background will be fun and browser downloads should be easy to load.

I’ve installed Total Commander on the Android side to help with moving files about. The installed BSD socket library would allow running an old Mosaic browser, or AWeb, but neither is really suited to any dynamic content. They would be fast though. In practice, Chrome and a download mount is more realistic. It’s time to go Aminet fishing.

It turns out that it is possible to put hard files on the SD card, but they must be placed in the Android app data directory and made by the app for correct permissions. So a 512 MB disk was made for better use of larger development versions. This is good for the Pascal 3.1.1 version.

Onwards to install a good editor such as Black’s Editor, and of course LHA and some other goodies such as NewIcons. I’ll delete the LCL alpha units from Pascal as these will not be used by me. I might even get into ARexx or some of the wonderful things on those CD images from Meeting Pearls or a cover disk archive.

Update: For some reason the SD card hard disk image becomes read-locked. The insistent gremlins of the demands of time value money. So it’s 100 MB and a few libraries short of C. Meanwhile Java N-IDE is churning out class files, and PipedInputStream has the buffer to stop PipedOutputStream waffling on and filling up memory. Hecl, the language, is to be hooked into the CLI I’m throwing together. Then some data time streams and some algorithms. I think the interesting bit today was the idea of stream variables. No strings: a minimum would be a stream.

So after building a CLI and adding in some nice commands, maybe even JOGL as the Android graphics? You know the 32- and 64-bit restrictions (both) on the Play Store though. I wonder if both are pre-built, as much of the regular Android development cycle is filled with crap. Flutter looks good, but for mobile CLI tools with some style of ’80s bitmap, it’s just a little too formulaic.

## Today’s Thought

```
import 'dart:math';

class PseudoRandom {
int a;
int c;
int m = 1 << 32;
int s;
int i;

PseudoRandom([int prod = 1664525, int add = 1013904223]) {
a = prod;
c = add;//the increment was never assigned, leaving next()/prev() broken
s = Random().nextInt(m) * 2 + 1;//odd
next();// a fast round
i = a.modInverse(m);//4276115653 as inverse of 1664525
}

int next() {
return s = (a * s + c) % m;
}

int prev() {
return s = (s - c) * i % m;
}
}

class RingNick {
List<double> walls = [ 0.25, 0.5, 0.75 ];
int position = 0;
int mostEscaped = 1;//the lowest pair of walls 0.25 and 0.5
int leastEscaped = 2;//the highest walls 0.5 and 0.75
int theThird = 0;//the 0.75 and 0.25 walls
bool right = true;
PseudoRandom pr = PseudoRandom();

int _getPosition() => position;

int _asMod(int pos) {
return pos % walls.length;
}

void _setPosition(int pos) {
position = _asMod(pos);
}

void _next() {
int direction = right ? 0 : walls.length - 1;//truncate to 2
double wall = walls[_asMod(_getPosition() + direction)];
if(pr.next() > (wall * pr.m).toInt()) {
//jumped
_setPosition(position + (right ? 1 : walls.length - 1));
} else {
//not jumped
right = !right;//bounce
}
}

void _prev() {
int direction = !right ? 0 : walls.length - 1;//truncate to 2
double wall = walls[_asMod(_getPosition() + direction)];
if(pr.s > (wall * pr.m).toInt()) {// the jump over before sync
//jumped
_setPosition(position + (!right ? 1 : walls.length - 1));
} else {
//not jumped
right = !right;//bounce -- double bounce and scale before sync
}
pr.prev();//exact inverse
}

void next() {
_next();
while(_getPosition() == mostEscaped) _next();
}

void prev() {
_prev();
while(_getPosition() == mostEscaped) _prev();
}
}

class GroupHandler {
List<RingNick> rn;

GroupHandler(int size) {
if(size % 2 == 0) size++;//keep an odd count for a clean majority
rn = List<RingNick>.generate(size, (i) => RingNick());//filled, not null slots
}

void next() {
for(RingNick r in rn) r.next();
}

void prev() {
for(RingNick r in rn.reversed) r.prev();
}

bool majority() {
int count = 0;
for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) count++;//a main cumulative
return (2 * count > rn.length);// strict majority of the ring nicks
}

void modulate() {
for(RingNick r in rn) if(r._getPosition() == r.leastEscaped) {
r._setPosition(r.theThird);
} else {
//mostEscaped eliminated by not being used
r._setPosition(r.leastEscaped);
}
}
}

class Modulator {
GroupHandler gh = GroupHandler(55);

int putBit(bool bitToAbsorb) {//returns absorption status
gh.next();
if(gh.majority()) {//main zero state
if(bitToAbsorb) {
gh.modulate();
return 0;//a zero yet to absorb
} else {
return 1;//absorbed zero
}
} else {
return -1;//no absorption emitted 1
}
}

int getBit(bool bitLastEmitted) {
if(gh.majority()) {//zero
gh.prev();
return 1;//last bit not needed emit zero
} else {
if(bitLastEmitted) {
gh.prev();
return -1;//last bit needed and nothing to emit
} else {
gh.modulate();
gh.prev();
return 0;//last bit needed, emit 1
}
}
}
}

class StackHandler {
List<bool> data = [];
Modulator m = Modulator();

int putBits() {
int count = 0;
while(data.length > 0) {
bool v = data.removeLast();
switch(m.putBit(v)) {
case -1:
break;
case 0:
break;
case 1:
break;//absorbed zero
default: break;
}
count++;
}
return count;
}

void getBits(int count) {
while(count > 0) {
bool v;
v = (data.length == 0 ? false : data.removeLast());//zeros out
switch(m.getBit(v)) {
case 1:
break;
case 0:
break;
case -1:
default: break;
}
count--;
}
}
}

```

## Statistics and Damn Lies

I was wondering about the statistics problem I call the ABC problem. Say you have 3 walls in a circular path, of different heights, and between them are points marked A, B and C. In any ‘turn’ the ‘climber’ attempts to scale the wall in the current clockwise or anti-clockwise direction. The chance of failing is proportional to the wall height, and if the climber fails to get over a wall, they reverse direction. A simple thing, but what is the chance that the climber will be found facing clockwise just before attempting a wall? Is it close to 0.5, given that the problem is not symmetric?

More interestingly, the climber will in a very real sense be captured more often in the cell with the highest pair of walls. If the cell with the lowest pair of walls is considered just as consumption of time, then what is the ratio of the containment time over the total time not spent in the most escapable wall cell?
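The facing-clockwise question is easy to probe empirically. This Java sketch reuses the wall heights from the Dart listing above (0.25, 0.5, 0.75), treating a wall’s height as its probability of repelling the climber, which is the convention the Dart code uses; the cell and wall indexing follows that code too:

```java
import java.util.Random;

public class AbcWalls {
    // one trial run; returns {occupancy0, occupancy1, occupancy2, facingClockwiseCount}
    static long[] run(long steps, long seed) {
        double[] walls = {0.25, 0.5, 0.75};
        long[] stat = new long[4];
        int pos = 0;
        boolean right = true;
        Random rnd = new Random(seed);//fixed seed for repeatability
        for (long s = 0; s < steps; s++) {
            stat[pos]++;
            if (right) stat[3]++;
            // the wall ahead: walls[pos] when clockwise, walls[(pos + 2) % 3] otherwise
            double wall = right ? walls[pos] : walls[(pos + 2) % 3];
            if (rnd.nextDouble() > wall) {
                pos = (pos + (right ? 1 : 2)) % 3;//scaled it
            } else {
                right = !right;//failed: reverse direction
            }
        }
        return stat;
    }

    public static void main(String[] args) {
        long[] s = run(1_000_000, 42);
        System.out.println("occupancy A, B, C: " + s[0] + " " + s[1] + " " + s[2]);
        System.out.println("P(facing clockwise) ~ " + s[3] / 1.0e6);
    }
}
```

In my runs the clockwise fraction sits very close to 0.5, which answers the first question empirically; the occupancy counts are printed so the containment ratio can be read off directly.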

So consider the binomial distribution of the elimination of the ‘emptiest’ cell when repeating this pattern as an array with co-prime ‘dice’ (if all occupancy has to be in either of the two most secure cells in each ‘ring nick’); the rate depends on the number of ring nicks. The considered security majority state is the state (selected from the two most secure cell states) which more of the ring nicks are in, given that none are in the least secure of the three states.

For the ring nick array to be in the most secure majority more than two-thirds of the time is another binomial or two away. If the most secure state holds the majority more than two-thirds of the time (excluding gaping minimal-occupancy cells), and the middle-security cells hold the majority less than two-thirds of the time (by unitary summation), then there exists a Jaxon Modulation coding to place data on the prisoners by reversing all their directions at once where necessary, inverting the majority into a rarer minority state with more Shannon information. Note that the pseudo-random dice and other quantifying information remain constant in bits.

Dedicated to Kurt Gödel … I am number 6. 😀

## BLZW Compression Java

Uses Sais.java with dictionary persistence and initialisation corrections, along with an alignment fix and an unused function removed. A 32-bucket context provides an effective 17-bit dictionary key, using just 12 bits, along with the BWT redundancy model. This should provide superior compression of text. Now includes the faster skip decode. Feel free to donate to grow some open source based on data compression and related codecs.

```
/* BWT/LZW fast wide dictionary. (C)2016-2017 K Ring Technologies Ltd.
The context is used to make 32 dictionary spaces for 128k symbols max.
This then gives 12 bit tokens for an almost effective 16 bit dictionary.
For an approximate 20% data saving above regular LZW.

The process is optimized for L2 cache sizes.

A mod 16 gives DT and EU collisions on hash.
A mod 32 is ASCII proof, and hence good for text.

The count compaction includes a skip code for efficient storage.
The dictionary persists over the stream for good running compression.
64k blocks are used for fast BWT. Larger blocks would give better
compression, but be slower. The main loss is the count compaction storage.

An arithmetic coder post process may be effective but would be slow. Dictionary
acceleration would not necessarily be useful, and problematic after the
stream start. A 12 bit code is easy to pack, keeps the dictionary small
and has the sweet spot of redundancy while not making large rare or
single use symbols.
*/

package uk.co.kring.net;

import java.io.EOFException;
import java.io.Externalizable;
import java.io.FilterInputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.util.HashMap;

/**
* Created by user on 06/06/2016.
*/
public class Packer {

public static class OutputStream extends FilterOutputStream implements Externalizable {

byte[] buf = new byte[4096 * 16];//64K block max
int cnt = 0;//pointer to end
int[] dmax = new int[32];
HashMap<String, Integer> dict;

public OutputStream(java.io.OutputStream out) {
super(out);
}

@Override
public void readExternal(ObjectInput input) throws IOException, ClassNotFoundException {
}

@Override
public void writeExternal(ObjectOutput output) throws IOException {
output.writeObject(out);
output.write(buf);
output.writeChar(cnt);
output.writeObject(dict);
}

@Override
public void close() throws IOException {
flush();
out.close();
}

private byte pair = 0;
private boolean two = false;

private void outputCount(int num, boolean small, boolean tiny) throws IOException {
if(tiny) {
out.write((byte)num);
return;
}
if(small) {
out.write((byte)num);
pair = (byte)((pair << 4) + (num >> 8));
if(two) {
two = false;
out.write(pair);
} else {
two = true;
}
return;
}
out.write((byte)(num >> 8));
out.write((byte)num);
}

@Override
public void flush() throws IOException {
outputCount(cnt, false, false);//just in case length
char[] count = new char[256];
if(dict == null) {
dict = new HashMap<>();
for(int i = 0; i < 32; i++) {
dmax[i] = 256;//dictionary max
}
}
for(int i = 0; i < cnt; i++) {
count[buf[i] & 0xff]++;//unsigned byte index (a signed byte would index negatively)
}
char skip = 0;
boolean first = true;
char acc = 0;
char[] start = new char[256];
for(int j = 0; j < 2; j++) {
for (int i = 0; i < 256; i++) {
if(j == 0) {
acc += count[i];
start[i] = acc;
}
if (count[i] == 0) {
skip++;
if (first) {
outputCount(0, false, true);
first = false;
}
} else {
if (skip != 0) {
outputCount(skip, false, true);
skip = 0;
first = true;
}
outputCount(count[i], false, true);
count[i] >>= 8;
}
}
if(skip != 0) outputCount(skip, false, true);//final skip
}
int[] ptr = new int[buf.length];
byte[] bwt = new byte[buf.length];

outputCount(Sais.bwtransform(buf, bwt, ptr, cnt), false, false);

//now an lzw
String sym = "";
char context = 0;
char lastContext = 0;
int test = 0;
for(int j = 0; j < cnt; j++) {
while(j >= start[context]) context++;
if(lastContext == context) {
} else {
lastContext = context;
outputCount(test, true, false);
sym = "" + (char)(bwt[j] & 0xff);//new char (as a char, matching the decoder)
}
if(sym.length() == 1) {
test = (int)sym.charAt(0);
} else {
if(dict.containsKey(context + sym)) {
test = dict.get(context + sym);
} else {
outputCount(test, true, false);
if (dmax[context & 0x1f] < 0x1000) {//context limit
dict.put(context + sym, dmax[context & 0x1f]);
dmax[context & 0x1f]++;
}
sym = "" + (char)(bwt[j] & 0xff);//new symbol
}
}
}
outputCount(test, true, false);//last match
if(!two) outputCount(0, true, false);//aligned data
out.flush();
cnt = 0;//fill next buffer
}

@Override
public void write(int oneByte) throws IOException {
if(cnt == buf.length) flush();
buf[cnt++] = (byte)oneByte;
}
}

public static class InputStream extends FilterInputStream implements Externalizable {

@Override
public void readExternal(ObjectInput input) throws IOException, ClassNotFoundException {
}

@Override
public void writeExternal(ObjectOutput output) throws IOException {
output.writeObject(in);
output.write(buf);
output.writeChar(idx);
output.writeChar(cnt);
output.writeObject(dict);
}

//SEE MIT LICENCE OF Sais.java

private static void unbwt(byte[] T, byte[] U, int[] LF, int n, int pidx) {
int[] C = new int[256];
int i, t;
//for(i = 0; i < 256; ++i) { C[i] = 0; }//Java
for(i = 0; i < n; ++i) { LF[i] = C[(int)(T[i] & 0xff)]++; }
for(i = 0, t = 0; i < 256; ++i) { t += C[i]; C[i] = t - C[i]; }
for(i = n - 1, t = 0; 0 <= i; --i) {
t = LF[t] + C[(int)((U[i] = T[t]) & 0xff)];
t += (t < pidx) ? 1 : 0;
}
}

byte[] buf = new byte[4096 * 16];//64K block max
int cnt = 0;//pointer to end
int idx = 0;
int[] dmax = new int[32];
HashMap<Integer, String> dict;

private boolean two = false;
private int vala = 0;
private int valb = 0;

private int reader() throws IOException {
int i = in.read();
if(i == -1) throw new EOFException("End Of Stream");
return i;
}

private char inCount(boolean small, boolean tiny) throws IOException {
if(tiny) {
return (char)reader();//single byte count
}
if(small) {
if(!two) {
vala = reader();//low byte of the first small
valb = reader();//low byte of the second small
int valc = reader();//the packed pair of high nibbles
vala += (valc << 4) & 0xf00;
valb += (valc << 8) & 0xf00;
two = true;
} else {
vala = valb;
two = false;
}
return (char)vala;
}
int val = reader() << 8;
val += reader();//big counts are two bytes, high then low
return (char)val;
}

public InputStream(java.io.InputStream in) {
super(in);
}

@Override
public int available() throws IOException {
return cnt - idx;
}

@Override
public void close() throws IOException {
in.close();
}

private void doReads() throws IOException {
if(available() == 0) {
two = false;//align
if(dict == null) {
dict = new HashMap<>();
for(int i = 0; i < 32; i++) {
dmax[i] = 256;
}
}
cnt = inCount(false, false);
char[] count = new char[256];
char tmp;
for(int j = 0; j < 2; j++) {
for (int i = 0; i < 256; i++) {
count[i] += tmp = (char)(inCount(false, true) << (j == 1?8:0));
if (tmp == 0) {
i += inCount(false, true) - 1;
}
}
}
for(int i = 1; i < 256; i++) {
count[i] += count[i - 1];//accumulate
}
if(cnt != count[255]) throw new IOException("Bad Input Check (character count)");
int choose = inCount(false, false);//read index
if(cnt < choose) throw new IOException("Bad Input Check (selected row)");
byte[] build;//make this
//then lzw
//rosetta code
int context = 0;
int lastContext = 0;
String w = "" + inCount(true, false);
StringBuilder result = new StringBuilder(w);
while (result.length() < cnt) {//not yet complete
char k = inCount(true, false);
String entry;
while(result.length() > count[context]) {
context++;//do first
if (context > 255)
throw new IOException("Bad Input Check (character count)");
}
if(k < 256)
entry = "" + k;
else if (dict.containsKey(((context & 0x1f) << 16) + k))
entry = dict.get(((context & 0x1f) << 16) + k);
else if (k == dmax[context & 0x1f])
entry = w + w.charAt(0);
else
throw new IOException("Bad Input Check (token: " + k + ")");
result.append(entry);
// Add w+entry[0] to the dictionary.
if(lastContext == context) {
if (dmax[context & 0x1f] < 0x1000) {
dict.put(((context & 0x1f) << 16) +
(dmax[context & 0x1f]++),
w + entry.charAt(0));
}
w = entry;
} else {
//context change
context = lastContext;
//and following context should be a <256 ...
if(result.length() < cnt) {
w = "" + inCount(true, false);
result.append(w);
}
}
}
build = result.toString().getBytes("ISO-8859-1");//byte-per-char, charset safe
//working buffers
int[] wrk = new int[buf.length];
unbwt(build, buf, wrk, cnt, choose);
idx = 0;//restart the read pointer on the fresh block
if(!two) inCount(true, false);//aligned data
}

@Override
public int read() throws IOException {
try {
doReads();//fill the buffer when empty, before the first access
return buf[idx++] & 0xff;//unsigned
} catch(EOFException e) {
return -1;
}
}

@Override
public long skip(long byteCount) throws IOException {
long i;
for(i = 0; i < byteCount; i++)
if(read() == -1) break;
return i;
}

@Override
public boolean markSupported() {
return false;
}

@Override
public synchronized void reset() throws IOException {
throw new IOException("Mark Not Supported");
}
}
}
```