A Modified ElGamal for Passwords Only

It occurred to me that g does not need to be made public for ElGamal signing, if the value g^H(m) is stored as the password hash, generated by the client. Also (r, s) can be changed to (r, r^s) to reduce the server verification load to one mod power, one full-precision multiply mod p, and a subtraction equality test. So on the creation of a new password (y, p, g^H(m)) is created, and each login needs the client to generate a k value to make (r, r^s).
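As a sketch of the arithmetic behind that operation count, assuming only the standard ElGamal relations y = g^x mod p, r = g^k mod p and s = k^(-1)(H(m) - xr) mod (p-1): the usual verification is g^H(m) ≡ y^r · r^s (mod p), so if the client sends (r, r^s mod p) rather than (r, s), the server computes y^r (the one mod power), multiplies by the received r^s (the one multiply mod p), and compares the product against the stored g^H(m) (the subtraction equality test).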

Password recovery would be a little complex, and involve some email backdoor based on maybe using x as a pseudo H(m), and verifying the changes via generation of y. This would of course only set the local browser to have a new password. So maybe a unique (y, p, g^H(m)) per browser local store in use. Index the local storage via email address, and Bob’s yer been here before.

Also, the server can crypt any pending view using H(m) as a person’s private key, or the private key as a browser-specific personal private key, or maybe even a browser key with all clients using the same local store x value. All using DH shared secrets. This keeps data in a database a bit more private, and sometimes encrypt-to-self might be useful.

Is s = H(m)(1-r)k^(-1) mod (p-1) an option? This sets x = H(m), eliminating y as a separate value and making (p, g^H(m)) sufficient for authentication server storage; g is only needed if the server needs to send crypts. Along with r = g^k mod p, as some easy sign. (r, s) might have to be used, as r^s could be equated as modinverse(r) for an easy g^H(m) equality, and the requirement to calculate s from r^s is a challenge. So a secure version is not quite as server efficient.
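Checking that choice against the signing congruence (a sketch, not a security argument): ElGamal signing needs H(m) ≡ xr + ks (mod p-1), and substituting s = H(m)(1-r)k^(-1) gives ks = H(m) - H(m)r, so the congruence holds exactly when x = H(m). The verification then collapses to (g^H(m))^r · r^s ≡ g^H(m) (mod p), which needs only the stored g^H(m) and p.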

In reality k also has to be computed to prevent (r, s) reuse. This requires the k choice to be the server’s. Sending k in plaintext defeats the security, so g is needed, to calculate g^z, and so (g^z)^H(m) = (g^H(m))^z = k on both sides. A retry randomizer to hide s = 0, and a protocol is possible.

This surpasses a server MD5 of the password. If the MD5 is client side, a server capture can log in. If the MD5 is server side, the transit intercept is … but a server DB compromise also needs a web server compromise. This algorithm also needs a client side compromise, or email intercept as per.

The reuse of (r, s) can’t be prevented without knowing k, and hence H(m), therefore a shared secret as a returned value implies H(m) knowledge. So one mod power client side, and two server side.

g^k to client.
(g^k)^H(m) to server.
(g^H(m))^k = (g^k)^H(m) tests true.
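A minimal sketch of that exchange using Java BigInteger arithmetic; the class name, toy parameter sizes and variable names are illustrative only, not part of the scheme above:

import java.math.BigInteger;
import java.security.SecureRandom;

// Illustrative only: one mod power on the client, two on the server, matching the counts above.
public final class ChallengeResponseSketch {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(512, rnd);     // toy prime size for the sketch
        BigInteger g = BigInteger.valueOf(2);
        BigInteger hm = new BigInteger(256, rnd);               // stands in for H(m)
        BigInteger stored = g.modPow(hm, p);                    // server stores g^H(m) at registration

        BigInteger k = new BigInteger(256, rnd);                // server picks k per login
        BigInteger challenge = g.modPow(k, p);                  // g^k to client

        BigInteger response = challenge.modPow(hm, p);          // (g^k)^H(m) back to server

        boolean ok = stored.modPow(k, p).equals(response);      // (g^H(m))^k = (g^k)^H(m) tests true
        System.out.println(ok);
    }
}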

Signatures are useless as challenge responses. The RSA version would have to involve a signature on H(m) and so need H(m) direct. Also, the function H can be quite interesting to study. The application of client side salt also is not needed on the server side as a decode key, and so not decoded there. DH is so cool like that. And (p-1) having a large factor is easy to arrange in the key generation. And write access is harder, most of the time, to obtain for data.

The storing of a crypt with the g^k used locks it for H(m) keyed access. This could void data on a password reset, or a browser local storage reset, but does prevent some client data leak opportunities, such as DB decrypt keys. This would have multiple crypts of the symmetric key for shared data, but would this significantly reduce the shared key security? It would prevent new users accessing the said secured data without cracking the shared key. A locked share for private threads say?

Spamming your friends with g^salt and g^salt^H(m)?

The first one is a good idea, the second not so much. AI spam encoding g^salt to your and your friends’ accounts. The critical thing is the friend doesn’t get the password. Assume a bad friend, who registers and gets g^salt to activate, from their own chosen spoof password. An email does get sent to your address, to cancel the friend as an option, and no other problem exists excepting login to a primary mail account. As a spoof maybe would see the option to remove you from your own account.

The primary control email account would then need secondary authentication. Such as only see the spam folder, and know what to open first and in order. For password recovery, this would be ok. For initial registration, it would be first come first served anyhow.

Sallen-Key ZDF design

As part of the VST I am producing, I have designed an SK filter analogue where the loading of the first stage by the second is removed to ease implementation. This only affects the filter Q, which then has an easy translation of the poles to compensate. Implementing it as a CR filter simulation reduces the basic calculation. This is then expanded on by a Zero Delay design, to better its performance.

ZDF filters rely on making a better integral estimate of the voltage over the sample interval to better calculate the linear current charge delta voltage. More of a trapezoid integration than a sum of rectangles. There are still some non-linear charge effects, as the voltage affects the current. The current output sample is now not known in advance, so a collection of terms has to be gathered to solve for it. Given a high enough sample rate, the error of linearity is small. Smaller than without it, and the phase response is flat due to the error being symmetric on the simulated capacitor voltage and drive, and not just the capacitor voltage.
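A minimal one-pole sketch of that trapezoidal, zero-delay-feedback estimate, assuming a prewarped cutoff; the class and variable names are illustrative and this is not the VST code itself:

// One-pole ZDF (trapezoidal) low pass: solve the implicit output, then update the state.
public final class ZdfOnePole {
    private double s; // integrator (capacitor) state
    private double g; // prewarped gain per sample interval

    public ZdfOnePole(double cutoffHz, double sampleRateHz) {
        g = Math.tan(Math.PI * cutoffHz / sampleRateHz); // bilinear prewarp
    }

    public double process(double in) {
        // Implicit equation v = s + g * (in - v), solved directly for v.
        double v = (s + g * in) / (1.0 + g);
        s = 2.0 * v - s; // trapezoidal state update (the dx added on at the end)
        return v;
    }
}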

The frequency to the correct resistive constant is a good match, and any further error is equivalent to a high frequency gain reduction. There is a maximum frequency of stability introduced in some filters, but this is not one of those. Stability increases with ZDF. The double pole iteration is best done by considering x + dx terms and deferring the dx calculation until later. Almost the output of pole 1 is used to calculate most of the output of pole 2 multiplied by a factor, added on to the pole 1 result, and the pole 2 result is then finally divided. These dx are then added to make the final outputs to memorize.

Cryptography





pub 4096R/8E2EAD58 2017-07-30 Simon Jackson

MIT key server

I’ve been looking into cryptography today and have developed a quartet of Java filter classes which do Diffie-Hellman 2048-bit AES key transfer with AES encryption, and ElGamal signing. I chose not to use the shared secret method which uses both private keys, but went for a single secret symmetric AES version, with no back communication.

The main issues were with the signature fail stream close handling, to avoid data corruption via pre-verified data being read as active. An interesting challenge it has been. Other ciphers may have been more logical to some people, and doing the DH modPow by explicit coding was good for the code soul.

I think the discrete logarithm problem is quite secure, and has the square order of 1 prime versus the RSA 2 primes for the same key length. The elliptic curve methods are supposedly harder, but the key topology has perhaps some backdoors deep in some later maths. The AES 128 has the lowest key complexity, and is the weakness in the scheme as written, so an interleave was made.

Java does make it difficult to build a standard enhanced symmetric cipher to fix this key shortfall. Not impossible, but difficult. I may add an intermediate permutation filter to expand the symmetric key length. In the end, I decided on a split symmetric 256-bit key for AES, one half for an outer ECB, and one half for an inner CBC. The 16 byte IV was used as a step offset between them for an effective 256-bit key.

The DH 2048 does not do key exchange with a common p or g, as this is what leads to the x collision over the same p problem. The original plan of a public key with less exchange of ephemeral keys is better. The time-to-solve complexity is similar to RSA at a similar key length. EC cryptography is cool, but still a little not understood, which is ironic for a mathematical field, with a little too much “under” information on how maybe to “find” holes.

The whole concept of perfect forward security moves the game on to AES cracks based on initial stream content estimates. I’d suggest most of the original key exchange space is pre-computed for a simple 128-bit symmetric crack by now. Out of all the built-in Java key types, DH is from my point of view the best for public key cryptography. RSA is cool too for sure, but division is a “relatively” simple operation. There is an estimated 20-bit advantage in the discrete log problem.

DH keys can be decoded to do ElGamal and basic public key secret generation. I’m not sure if DSA as an alternative just needs an extra factor, but a Pollard rho triggering a future co p, q effect might be possible. P and Q in DSA are not independent. One is almost a multiple of the other …

Welcome to the national insecurity bank robbery. I know, the state via an affiliated plc, stole 1/4 of my income last year by getting me to destroy evidence.

The artificial limits on the key length and problems leading from that are in the JDK source. Also the deletion of keys from the memory pages when freed back to the OS, may be a problem. Quite a nice programming challenge to do. The Java libs have some strange restrictions on g. View the source.

/* Diffie-Hellman Cipher AES. (C)2017 K Ring Technologies Ltd.
 A DH symmetric secret (1024 bit) for a 2* AES 128 (256 bit) interleave.
 The 16 byte offset interleave of the ECB is used for the IV slot
 of the CBC.
 */
package uk.co.kring.net;

import java.io.FilterInputStream;
import java.io.FilterOutputStream;
import java.math.BigInteger;
import java.math.SecureBigInteger;
import java.security.KeyPair;
import java.security.PublicKey;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.interfaces.DHPrivateKey;
import javax.crypto.interfaces.DHPublicKey;
import javax.crypto.spec.IvParameterSpec;

/**
 *
 * @author Simon
 */
public final class DHCipher {
    
    public static final class InputStream extends FilterInputStream {

        public InputStream(java.io.InputStream in, KeyPair pub) throws Exception {
            super(in);
            BigInteger p, sk;
            SecureBigInteger x;
            int bytes;
            byte[] bb;
            DHPublicKey k = (DHPublicKey)pub.getPublic();
            p = k.getParams().getP();
            bytes = (p.bitLength() + 7) / 8;
            DHPrivateKey m =(DHPrivateKey)pub.getPrivate();
            x = new SecureBigInteger(m.getX());
            bb = new byte[bytes];
            in.read(bb);
            sk = new BigInteger(bb);
            if(!sk.abs().equals(sk)) {
                sk = new BigInteger(asLen(bb, bb.length + 1));
            }
            sk = sk.modPow(x, p);
            x.destroy();
            Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
            c.init(Cipher.DECRYPT_MODE, Keys.getAES(sk)[0]);
            in = new CipherInputStream(in, c);
            bb = new byte[24];
            in.read(bb);
            IvParameterSpec iv = new IvParameterSpec(Arrays.copyOfRange(bb, 8, 24));
            c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(Cipher.DECRYPT_MODE, Keys.getAES(sk)[1], iv);
            in = new CipherInputStream(in, c);
            int i = in.read() % 23;
            in.skip(i);
            this.in = in; // point the filter field at the wrapped cipher streams
        }
    }
    
    public static byte[] asLen(byte[] b, int len) {
        byte[] q = new byte[len];
        int j;
        for(int i = len - 1; i >= 0; i--) {
            j = i + b.length - q.length;
            if(j < 0) break;
            q[i] = b[j];
        }
        return q;
    }
    
    public static final class OutputStream extends FilterOutputStream {
        
        public OutputStream(java.io.OutputStream out, PublicKey pub) throws Exception {
            super(out);
            BigInteger y, g, p, sk;
            int bytes;
            byte[] bb;
            DHPublicKey k = (DHPublicKey)pub;
            y = k.getY();
            g = k.getParams().getG();
            p = k.getParams().getP();
            bytes = (p.bitLength() + 7) / 8;
            bb = new byte[bytes];
            Keys.getR().nextBytes(bb);
            BigInteger b = new BigInteger(bb);
            b = b.abs();
            bb = g.modPow(b, p).toByteArray();
            bb = asLen(bb, bytes);
            out.write(bb);//ephemeral public value g^b mod p
            sk = y.modPow(b, p);
            Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE, Keys.getAES(sk)[0]);
            out = new CipherOutputStream(out, c);
            bb = new byte[24];
            Keys.getR().nextBytes(bb);
            out.write(bb);
            IvParameterSpec iv = new IvParameterSpec(Arrays.copyOfRange(bb, 8, 24));
            c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE, Keys.getAES(sk)[1], iv);
            out = new CipherOutputStream(out, c);
            Keys.getR().nextBytes(bb);
            int n = (bb[0] & 0xFF) % 23;                 // junk prefix length 0..22
            out.write(n + 23 * ((bb[23] & 0xFF) % 10));  // masked count stays below 256, so read() % 23 recovers n
            out.write(bb, 1, n);
            this.out = out; // point the filter field at the wrapped cipher streams
        }
    }
}

And the following code for clearing the key. Perfect forward security requires the same q to be used and an extra negotiation step. As it stands it’s not perfect, but as good as RSA, maybe slightly better.

/* Useful. (C)2017 K Ring Technologies Ltd.
 */
package java.math;

import java.util.Arrays;
import java.util.Vector;
import javax.security.auth.DestroyFailedException;
import javax.security.auth.Destroyable;
import uk.co.kring.net.Keys;

/**
 *
 * @author Simon
 */
public final class SecureBigInteger extends BigInteger implements Destroyable {
    
    private boolean d = false;
    private BigInteger ref;
    private static final Vector<SecureBigInteger> m = new Vector<SecureBigInteger>();
    
    private synchronized void handler(BigInteger val) {
        ref = val;
        m.add(this);
        System.arraycopy(val.mag, 0, mag, 0, mag.length);
    }
    
    public SecureBigInteger(BigInteger val) throws Exception {
        super(val.bitLength(), 1, Keys.getR());
        handler(val);
    }

    @Override
    public boolean isDestroyed() {
        return d;
    }

    @Override
    public void destroy() throws DestroyFailedException {
        Arrays.fill(mag, -1);
        m.remove(this);
        boolean in = false;
        for(SecureBigInteger x: m) {
            if(x.ref == ref) in = true;
        }
        if(!in) Arrays.fill(ref.mag, -1);//clear final instance
        d = true;
    }
    
    public void masterDestroy() throws DestroyFailedException {
        // iterate over a copy, as destroy() removes entries from the registry vector
        for(SecureBigInteger x: new Vector<SecureBigInteger>(m)) {
            x.destroy();
        }
    }
}

There is also the possibility of G exchange, which would allow for calculation of new Y. This would have advantages of instancing a public key set, based on a 1 to 1 crypt role. The cracking of any public key thus only cracks one link and not the full set of peers to a node. In reality, g just alters y, and p does change the crypt. So an exchange of new y is required. There is a potential flaw in this swap if the new p and g are chosen in a cracked domain.

Allowing the client to select g in the server selected p domain is a minor concession to duplication; the server would have to return a new y. Such a thing might invite a DoS attack, and so should be restricted somewhat. If g is high in repeated factors, then the private key is effectively multiplied up and reduced mod p-1, and g is reduced to a lower base.

Dissection of the Roots of the Mass Independent Space Equation

Term: (v^2)v''' | −9v v' v'' | 12(v')^3 | (1−v^2/c^2)v'(wv)^2
Constants: 3 | 2 | 1 | 1
Power: Square | Linear | Cubic | Square and Quartic
Roots: 3 Root Pairs | 2 Roots | 1 Root and 1 Root Pair | 1 Root Pair and 2 Root Pairs
Nature: Energy and Force of Force | Momentum, Force and Velocity of Force | Cube of Force | Force Energy
Term name: Potential Inertial Term | Kinetic Inertial Term | Strong Term | Relativistic Force Energy Coupling
Forces: Gravity, Dark, Strong, Weak, EM

The fact there are 4 connected modes, as it were, implies there are 6 crossovers between modes of action, indicating that one term can be stimulated to convert into another term. The exact equilibrium points can be set as 6 differential equation forms, with some further analysis required of stable phase space bounds, and unstable phases at which to alter the balance to have a particular effect. Installing a constant (or function) of proportionality in each of the following balance equations would in effect allow some translation of one term ‘resonance’ into another.

v v''' = −9 v' v''   (3 constants and 1 root point)
(v^2) v''' = 12 (v')^3   (3 constants and 6 root points)
v''' = (1 − v^2/c^2) v' w^2   (3 constants and 2 root points)
−9 v v'' = 12 (v')^2   (2 constants and 2 root points)
−9 v'' = (1 − v^2/c^2) w^2 v   (2 constants and 2 root points)
12 (v')^2 = (1 − v^2/c^2) (wv)^2   (1 constant and 12 root points)

Another interesting point is that 3 of the 6 are independent of w (the omega mass oscillation frequency), and so also, by implication, of the relativistic dependence on c.




The 3D Flavour Tensor in Analogue to the 4D of Einstein, for a 3D, 4D Curvature in Particle Physics

I like to keep updated about particle physics and LHC things, to quite an advanced level. My interest is in fields and their previous engineering value in radio waves and electronics in general. It makes sense to move to a tensor algebra in the 2+1 charge space, just as was done for the theory of gravitation. In some sense the conservation of acceleration becomes a conservation of net mapped curvature and it becomes funny via Noether’s Theorem.

CP violation as a horizon delta of radius of curvature from the “t” distance is perhaps relevant phrased as a moment of inertia in the 2+1, and its resultant geometric singular forms. This does create the idea of singular forms in the 2+1 space orbiting (or perhaps more correctly resonating) in tune with singularities in the 3+1 space. This interconnection entanglement, or something similar is perhaps connected to the “weak phase”.

So a 7D total space-time, with differing invariants in the 3D and 4D parts. The interesting thing from my perspective is the prediction of a heavy graviton, and conservation of acceleration. The idea that space itself holds its own shape without graviton interaction, and so conserves acceleration, while the heavy graviton can be a short range force which changes the curvature. The graviton then becomes a mediator of jerk and not acceleration. The graviton, being heavy, would also travel slower than light. Gravity waves would then not necessarily need graviton exchange.

Quantization of theories has I think in many ways gone too far. I think the big breaks of the 21st century will be turning quantized bulk statistics into unquantized statistics, with quantization applied to only some aspects of theories. The implication is that dark matter is bent spacetime, without matter being present to emit gravitons. In this sense I predict it is not particulate.

So 7D and a differential phase space coordinate for each D (except time) gives a 13D reality. The following is an interesting equation I arrived at at one point for velocity solutions to uncertainty. I did not incorporate electromagnetism, but it’s interesting in the number of solutions, or superposition of velocity states as it were. The w is constant in the assumption, but a perturbative expansion in it may be interesting. The units of the equation are conveniently force. A particle observing another particle would also be moving in this way, and the non-linear summation for the lab rest frame of explanation might be quite interesting.

(v^2)v''' − 9v v' v'' + 12(v')^3 + (1 − v^2/c^2) v' (wv)^2 = 0

With ' representing differentiation w.r.t. time, v' is acceleration and v'' is the jerk. I think v''' is called the jounce for those with a mind to learn all the Js. An interesting equation, considering the whole concept of uncertain geometry started from an observation that relative mass was kind of an invariant; mass oscillation, although weird with RMS mass and RMS energy conservation, was perhaps a good way of parameterizing an uncertainty “force” proportional to the kinetic energy momentum product. As an addition it was more commutative as a tensor algebra. Some other work I calculated suggests dark energy is conservation of mass times log of normalized velocity, and dark matter could be conserved acceleration, with gravity and the graviton operating not to bend space on density, but to bend space through a short-distance-acting heavy graviton. Changes in gravity could thus travel slower than light, and an integral with a partial fourth power fraction could expand into conserved acceleration, energy, momentum and mass information velocity (dark energy), with perhaps another form of Higgs, and an uncertainty boson (spin 1) as well.

So really a 13D geometry. Each velocity state in the above mass independent free space equation is an indication of a particle of differing mass. A particle count based on solutions. 6 quarks and all. An actual explanation for the three flavours of matter? So assuming an approximate linear superposable solution with 3 constants of integration, this gives 6 parameterized solutions from the first term via 3 constants and the square being rooted. The second term involves just 2 of the constants for 2 possible offsets, and the third term involves just one of the constants, but 3 roots with two being in complex conjugation. The final term involves just one of the constants, but an approximation to the fourth power for 4 roots, and disappearing when the velocity is the speed of light, and so is likely a rest mass term.

So that would likely be a fermion list. A boson list would be in the boundaries at the discontinuities between those solutions, with the effective mass of the boson controlled by the expected lifetime between the states, and the state energy mismatch. Also of importance is how the equation translates to 4D, 3D spacetime, and the normalized rotational invariants of EM and other things. Angular momentum is conserved and constant (dimensionless in uncertain geometry).

Assuming the first 3 terms are very small compared to the last term, and v is not the speed of light, there would have to be some imaginary component to velocity, and this imaginary part would be one of the degrees of freedom (leading to a total of 26). Is this imaginary velocity consistent with isospin?

Yang–Mills Existence and Mass Gap (Clay Problem)

If mass oscillation is proved to exist, then the mass gap can never be proved to be greater than zero as the mass must pass through zero for oscillation. This does exclude the possibility of complex mass oscillation, but this is just mass shrinkage (no eventual gap in the infinite time limit), or mass growth, and hence no minimum except in the big bang.

The 24 degrees of freedom on the relativistic compacted holographic 3D for the 26D string model imply, with elliptic functions, a 44 fold way. This is a decomposition into 26 sporadic elliptic patterns, and 18 generational spectra patterns. With the differential equation above providing 6*2*(2+1) combinations from the first three terms, and the 3 constants of integration locating in “colour space” through a different orthogonal basis, this would provide 24 apparent solution types, with 12 of them having a complex conjugation relation as a pair for 36. If this is the isospin solution, then the 12 fermionic solutions have all been found. That leaves the 12 bosonic solutions (the ones without a conjugate in the 3rd term generative), with only 5 (or if a photon is special 4) having been found so far. If the bosonic sector includes the dual rooting via the second term for spin polarity, then of the six (with the dual degenerates cancelled), two more are left to be found if light is special in the 4th term.

This would also leave 8 of the 44 way in a non existent capacity. I’d maybe focus on them being gluons, and consider the third still to be found as a second form of Higgs. OK.

Displacement Currents in Colour Space

Maybe an interesting wave induction effect is possible. I’m not sure what the transmitter should be made of. The ABC modulation may make it a bit “alternate” near the field emission. So not caused by bosons in the regular sense, more the “transition bosons” between particle states. The specific transitions between energy states may (although it’s not certain), pull the local ABC field in a resonant or engineered direction. The actual ABC solution of this reality has to have some reasoning for being stable for long enough. This does not imply though that no other ABC solutions act in parallel, or are not obtainable via some engineering means.

LZW (Perhaps with Dictionary Acceleration) Dictionaries in O(m) Memory

Referring to a previous hybrid BWT/LZW compression method I have devised, the dictionary of the LZW can be stored in chain-linked fixed-size structure arrays, with one character (the symbol end) back-linking to the first character through a chain. This makes efficient symbol indexing based on number, and with the slight addition of two extra pointers, a set of B-trees can be built, separated by symbol length, to also be loaded inside parallel arrays for fast incremental finding of the existence of a symbol. A 16 bucket move-to-front hash table could also be used instead of a B-tree, depending on the trade off between the memory of a 2 pointer B-tree, or a 1 pointer MTF collision hash chain.
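A minimal sketch of that back-linked, struct-of-arrays dictionary layout in Java; the class and field names are illustrative, and the extra B-tree or MTF hash pointers described above are omitted:

// Each symbol stores only its last character and a back link to its prefix symbol,
// so a symbol's full text is recovered by walking the chain; parallel arrays keep it compact.
public final class LzwDictionary {
    private final byte[] lastChar; // final character of each symbol
    private final int[] prefix;    // back link to the prefix symbol, -1 for single-character roots
    private final int[] length;    // cached symbol length, useful for length-bucketed indexes
    private int next;              // next free symbol index

    public LzwDictionary(int capacity) {
        lastChar = new byte[capacity];
        prefix = new int[capacity];
        length = new int[capacity];
        for (int i = 0; i < 256; i++) add(-1, (byte) i); // the single-character roots
    }

    public int add(int prefixSymbol, byte endChar) {
        lastChar[next] = endChar;
        prefix[next] = prefixSymbol;
        length[next] = prefixSymbol < 0 ? 1 : length[prefixSymbol] + 1;
        return next++;
    }

    // Walk the back links to expand a symbol code into its character sequence.
    public byte[] expand(int symbol) {
        byte[] out = new byte[length[symbol]];
        for (int i = out.length - 1; symbol >= 0; symbol = prefix[symbol]) {
            out[i--] = lastChar[symbol];
        }
        return out;
    }
}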

On the nature of the BWT size, and the efficiency. Using the same LZW dictionary across multiple BWT blocks with the same suffix start character is effective, with a minor edge effect rapidly reducing in percentage as the block size increases. An interleave reordering such that the suffix start character is the primary group-by of linearity assists in the scan for searchability. As a search can be rephrased as a join on various character pairings, the minimal character pair can be scanned up first, and “joined” to the end of the searched-for string, and then joined to the beginning in a reverse search, to then pull all the matches sequentially.

Finding the suffixes in the LZW structure to produce symbol codes is relatively easy; finding the associated set of prefixes and infixes is a little more complex. A mostly constant search string can be effectively compiled and searched. A suitable secondary index extension mapping symbol sequences to “atomic” character sequences can be constructed to assist in the transform of characters to symbol dictionary index code tuples. This is a second level table in effect, which can also be compressed for atom-specific search optimization without loading the LZW dictionary when there is no find.

The fact the BWT infers an all-matches sequential nature, and a second level of BWT with the dictionary index codes as the alphabet, could definitely reduce the needed scan time for finding each LZW symbol index sequence. Perhaps a unified B-tree as well as the length specific B-tree within the LZW dictionary would be useful for greater-than and less-than constraints.

As the index can become a self index, there may be a need to represent a row number alongside the entry. Multi-column indexes, or primary index keys, would then best be represented as pointer tuples, with some minor speed/size data duplication in context.

An extends chain pointer and a first-of-extends pointer are not required, as the next length B-tree will part index all extenders. A root pointer to the extenders and a secondary B-tree on each entry would speed finding all suffix or contained-in possibilities. Of course it would be best to place these 3 extra pointers in a parallel structure, so as not to be a data-interleaved array of structs, but a struct of arrays, when dynamic compilation of atomics is required.

The find performance will be slower than an uncompressed B-tree, but the compression is useful to save storage space. The fact that the memory is used more effectively when compression is used can sometimes lead to improved find performance for short matches with a high volume of matches. An inverted index can use the position index of the LZW symbol containing the preceding to reduce the size of the pointers, and the BWT locality effect can reduce the number of pointers. This is more standard, and combined with the above techniques for sub-phrases or super-phrases should give excellent find performance. For full record recovery, the found LZW symbols only provide decoding in context, and the full BWT block has to be decoded. A special reserved LZW symbol could precede a back pointer to the beginning of the BWT block, and work as a header of the post-placed char count table and BWT order count.

So finding a particular LZW symbol in a block can be iterated over, but the difficulty in speed is when the and condition comes in on the same inverse index. Can the squared time performance be reduced? Reducing the number and size of the pointers helps in some ways, but it does not reduce the essential scan and match nature of the time squared process. Ordering the matching so the “find” with the least number on the count comes first makes the iteration smaller on average, as it will be the least found, and hence least joined. The limiting of the join set to LZW symbols seems like it will bloom many invalid matches to be filtered, and in essence simplistically it does. But the lowering of the domain size allows application of some more techniques.

The first fact is the LZW symbols are in a BWT block subgroup based on the following characters. Not that helpful, but it does allow a fast filter, and fewer pointers before a full inverse BWT has to be done. The second fact is that the letter pair frequency effectively replaces the count as the join order priority of the and. It is further based on the BWT block subgroup size and the LZW symbol character counts for calculation of a pre-match density of a symbol; this can be effectively estimated via statistics, and does not need a fetch of the actual subgroup size. In collecting multiple “find” items, correlations can also be made on the information content of each, and a correlated but rarer “find” may be possible to substitute, or add in. Any common or uncorrelated “find” items should be ignored. Order by does tend to ruin some optimizations.

A “find” item combination cache should be maintained based on frequency of use and execution time to rebuild the result, both used in the eviction strategy. This in a real sense is a truncated “and” index. Replacing order by with some other method such as an order float, such that guaranteed order is not preserved but some semblance of polarity is kept, may also be very useful to reduce sort time, and prevent excessive activity and hence time spent when limit clauses are used. The float itself should perhaps be record linked, with an MTF kind of thing in the inverse index.

Implementation of Digital Audio filters

An interesting experience. The choice of FIR or IIR is the most primary. As the filtering is modelling classic filters, the shorter coefficient varieties of IIR are the best choice for me. The fact of an infinite impulse response is not of concern with a continuous stream of data, and coefficient rounding is not really an issue when using doubles. IIR also has the advantage of an easy Sallen-Key implementation, due to the subtraction and re-adding of the feedback component, with a very simple CR processing.

The most interesting choices are to do with the anti-alias filtering, as the interpolation filter on up-sampling is an easy choice. As the ear is not really responsive to phase, all the effort should be on the pass band response levels, and a good stop band non-response. Legendre or Butterworth are the candidates. The concept of a characteristic sound enters the design process at this point, as the cascading of SK filter sections is conceptually useful to improve the -6 dB response at cut off. This is a trade off of 20 kHz to 22.05 kHz in the alias pass band, and greater attenuation in the above 22.05 kHz infinite stop desire. The slightly greater desire for alias attenuation above pass band maximal flatness (for audio harmony) implies the Legendre filter is better for the purpose than Butterworth.

In the end, the final choice is one of convenience, and a 9th order filter was decided upon, with 4 times oversampling. The use of 4 times oversampling instead of 8 times oversampling increases the alias by an octave reduction. This fact, under the assumption of at least a linear reduction in the amplitude of the frequency generating an alias as frequency increases, just requires a -12 dB extra gain reduction in the alias filter for an effective equivalence to 8 times oversampling (the up to and the reflection back down to 6 + 6). The amount of GHz processing also halves. These facts then become constructive in the design, with the bulk alias close to the cut off, and the minor reflected alias-alias limit not being too relevant to overall alias inharmonic distortion.

A triple chain of 3 pole Legendre filter sections is the decided design. The approximate -9 dB at the corner allows for slightly shifting up the cut off and still maintaining a very effective stop band. Code reuse also aids the I-cache usage for effective CPU use. A single 3 pole Legendre is the interpolation up sample filter. The roll off from not using Butterworth does cut some high frequency content compared to the maximally flat response, but it outperforms a Bessel filter in this regard. It’s not as though a phaser or flanger needs to operate almost perfectly in the alias band.

Perhaps there is improvement to be made in the up sampling filter, by post up sample 88.2 kHz noise shaped injection to eliminate all error at 44.1 kHz. This may have a potential advantage to map the alias noise into the low frequencies, instead of encroaching from the higher frequencies to the lower, and for creating the alias as a reduction in signal to noise, instead of at certain inharmonic peaks. The main issue with this is the 44.1 kHz wave fundamental, seen as the amplitude ring modulation of the injected phase noise, by the 44.1 kHz stepped waveform between samples input. The 88.2 kHz “carrier” and the sidebands are higher in frequency, and of the same amplitude magnitude.

But as this is following for no 44.1 kHz error, the 88.2 kHz and sidebands are the induced noise, the magnitude of which is of the order of 1 octave up from the -3 dB roll at the corner, plus approximately the octave for a 3 pole filter, or about 36 dB cut of a signal 3/4 of the input amplitude. I’d estimate about -37 dB at 88.2 kHz, and -19 dB at 44.1 kHz. Post processing with a 9 pole filter provides an extra -54 dB on down sampling, for an estimate of around -73 dB or greater on the noise. That would be about 12 bit resolution at 44.1 kHz, increasing with frequency. All estimates, likely errors, but in general not a good idea from first principles. Given that the 44.1 kHz content would be very small post the interpolation filter, -73 dB down from this would be good, although I don’t think achievable in a sensible manner.

Using the last filtered sample in as the reference for the present sample filtered in as a base line, the signal at 22.05 kHz would be smoothed. It would have a notch filter effect, by injecting quantization offset ringing noise at 88.2 kHz to cancel 22.05 kHz. The notch would likely extend down in frequency for maybe -6 dB at about 11 kHz. Perhaps in the end it is just better to subtract the multiplied difference between two up sample filters using different sinc spreading of a 1000 and a 1100 sample occupancy zero inter fill. Subtracting the alternates up conversion delta as it were.

There is potentially also an argument for having a second order section with damping factor near 0.68 and corner 22.05 kHz to achieve some normalisation from sinc up-sampling. This adds in an amount of Q such as to peak the filter cancelling the sinc droop, which would be about 3% at 4 times oversampling.
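A hedged sketch of such a compensating second order section, using the standard biquad low pass form; mapping the 0.68 damping factor to Q = 1/(2 * 0.68) is an assumption here, and the class and variable names are illustrative:

// Standard biquad low-pass section (direct form II transposed), set up from a corner
// frequency and damping factor; used as a small resonant peak to offset sinc droop.
public final class SecondOrderSection {
    private final double b0, b1, b2, a1, a2; // normalized coefficients
    private double z1, z2;                   // filter state

    public SecondOrderSection(double cornerHz, double sampleRateHz, double damping) {
        double q = 1.0 / (2.0 * damping);    // assumed damping-to-Q mapping
        double w0 = 2.0 * Math.PI * cornerHz / sampleRateHz;
        double alpha = Math.sin(w0) / (2.0 * q);
        double cosw0 = Math.cos(w0);
        double a0 = 1.0 + alpha;
        b0 = (1.0 - cosw0) / 2.0 / a0;
        b1 = (1.0 - cosw0) / a0;
        b2 = b0;
        a1 = -2.0 * cosw0 / a0;
        a2 = (1.0 - alpha) / a0;
    }

    public double process(double in) {
        double out = b0 * in + z1;
        z1 = b1 * in - a1 * out + z2;
        z2 = b2 * in - a2 * out;
        return out;
    }
}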

EDIT: Some of you may have noticed that the required frequencies for stable filtering are too high at 4 times oversampling. So, unfortunately for the CPU load, an 8 times oversample has to be used. The sinc error is less than 1% at this oversample, but still corrected in a similar way, with a benefit of 2 extra poles. Following this by a 0.1 dB 3 pole Chebyshev high pass which has been inverted gives a reasonable 5 pole up sampling filter. The down sampling filter, for code efficiency, is a triple instance of the same inverse Chebyshev, with the corner frequencies slightly offset to produce more individual zeros, and some spreading of the “ringing”. These 9 poles are enough to get the stop band ripple to be lower than a 16 bit resolution. Odd-order inverse Chebyshev filters are essential for the reflected spectra to be continually decreasing in amplitude.

Power Systems

Lots of free energy videos about, but does it actually work, or is it just virtual vapourware? Here’s a highly unstable circuit I designed a few years ago. The magnetic balance is so fine that an external field can throw the circuit into an unstable power spike. Then I went for an inductance modulation of lower scale, using a 3 phase (+++), to 1 phase (++-) arrangement, for greater stability. The difficulty with such devices is not the working, but the switch off without raising volts potential to any unlucky hands. This safety aspect is the ultimate reason for non-use, and not as some suppose the disruptive effect on oil and other nuclear markets. Those markets may shrink, but will always be. The chemical industry will always have need of basic oil produce, and the lower short term profits of non-burn actually extend the future profits of chemical building. Transport is minor compared to health. The nuke industry could easily shrink, and still be big. The power waste of removing rods with 90% still effective power is a whitewash of the electric power from a military objective. Reactors would be different for pure civil use.

Amazing colours, but what’s it really about? The Pu problem of fast breeders, and somehow there will never be less of it, just does not add up for the efficient too-cheap-to-meter power promises of not too long ago. There seems to be no real research on gamma cavity down conversion technology. I wonder how long it will be before the nova bomb. The effective slowing of light to lower than the black hole threshold, at Sun core. I think the major challenge is getting super dielectrics far enough into the Sun without melt. I suppose this is some hyperbole focus problem. One day people will understand the simple application of button technology, and the boxes will judge and provide on intent or not. It’s not like they won’t have a self interest.

Musical Research

I’ve looked into free musical software of late, after helping to set up a PC for musical use. There’s the usual Audacity, and the feature-limited but good LMMS. Then I found SuperCollider, and I am now hooked on building some new synthesis tools. I started with a basic FM synth idea, and moved on to some LFO and sequencing features. It’s very nice. There is some minor annoyance with the SC language, but it works very well, and has access to the nice Qt toolkit for making GUIs. So far I’ve implemented the controls as a MIDI continuous controller bus proxy, so that MIDI in is easy. There will be no MIDI out, as it’s not that kind of be-all thing. I’ve settled on 32 controls per MIDI channel, and one window focus per MIDI channel.

I’ve enjoyed programming it so far. It’s the most musical I’ve been in a long while. I will continue this as a further development focus.

The situation so far https://github.com/jackokring/supercollider-demos

The last three are just an idea I had to keep the keyboard as a controller, and make all the GUI connection via continuous controllers only. I’ll keep this up to date as it develops.

Proof of Burn

Consider the proof of burn algorithm as an effective cryptocoin algorithm. Then turn this on its head. If a pseudo random burn density is exhibited, previously mined coins can be used as the burn sink. If this process is also considered a method of mine success, then the burn creates an associated make in the protocol. As this can be a fixed multiplier, it maintains a mint standard. An “I could have burnt block, proof of could have” as it were. And get coins for doing it.

Something like Slimcoin with a kick-in, kick-out coins balancer keeping the fixed-ish multiplier net around a centroid. And having “modulable” kicks within limits. An “alternative slicer” can split this transact delta into a “modulable” stream for storing data. So if a method of “moving root block” can maintain a maximal blockchain size, data compression and a fixed bound cryptocoin with some injected coin total supply “chaos” results.

QED.

Of course the minimalist right to sign and signing would work in effect, but would encourage opening many intermediate wallets. Burning these intermediate wallets also provides the final coins for the real wallet. This in essence is proof of work. The next step of proof of stake would be to put any chosen coin into the mix for the hash, to gain right to sign, as this would have the effect of maintaining a faster hashing with more coins to try, and block number and chain check hash would prevent rehashing doubles. Or as some have it, selection of the coin for the right of hash, a la premium bonds. Publishing private key hashes of intermediate wallets such that the coin can be “seen” as mined and owned looks promising. And maybe a right to sign transfer dole for the under coined, although this would negate its use as a bit exchange.

That all explains why some of the best are hybrid in the mechanistic building of the blockchain. The proof of burn allows for coin supply modulation, and on fixed supply coins a possible under representation over time. The idea that calculating something useful would be good is good at heart, but misses the easy re-seeding duplication necessity, as the useful result is an open plain text. In many ways Ethereum will excel at derivatives …

The whole crypto-currency market is full of clones with different marketing and some have a slight tech advantage. The main differences which are coming up are memory-intensive hashes, and even some blockchain-as-random-data hashes. Proof of stake looks good in principle, but does have a non-mining aspect, and so is locked out for new users to mine without an initial stake, and in some sense why would they mine? In such a situation it’s not mining but interest payments for service. Which devalues the market without effort, which is sort of not getting paid without equitable burn.

Proof of work may be environmentally inefficient, but better that the box does it.

Latest CODEC Source GPL v2+

The Latest compression CODEC source. Issued GPL v2 or greater. The context can be extended beyond 4 bits if you have enough memory and data to 8 bits easily, and a sub context can be made by nesting another BWT within each context block, for a massive 16 bit context, and a spectacular 28 bit dictionary of 268,435,456 entries. The skip code on the count table assists in data reduction, to make excellent use of such a large dictionary possible.

The minor 4 bits per symbol implicit context, has maximum utility on small dictionary entries, but the extra 16 times the number of entries allows larger entries in the same coding space. With a full 16 bit context enabled, the coding would allow over 50% dictionary symbol compression, and a much larger set of dictionary entries. The skip coding on large data sets is expected to be less than a 3% loss. With only a 4 bit context, a 25% symbol gain is expected.

On English text at about 2.1 bits per letter, almost 2 extra letters per symbol is an expected coding. So with a 12 bit index, a 25% gain is expected, plus a little for using BWT context, but a minor loss likely writes this off. The estimate then is close to optimal.

Further investigation into an auto-built dictionary based on letter group statistics, and generation of the entry-to-value mapping algorithmically, may be an effective method of reducing the space requirements of the dictionary.