
Logarex logarithmic compression

Logarex is a patented technology for a new compression method based on reducing the length of a number string: the data are first converted to numeric form and then to a series of logarithms, which in turn form a shorter (i.e. compressed) representation of the original number.

It is a "lossless" compression solution, meaning that data restored from the compressed state are identical to the original, pre-compressed data.
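The lossless round-trip property can be illustrated with any existing lossless codec, for example Python's standard-library zlib (the data string here is just a stand-in):

```python
import zlib

original = b"financial transaction record 0001"
compressed = zlib.compress(original)

# Lossless: decompressing returns exactly the original bytes.
assert zlib.decompress(compressed) == original
```

A lossy codec, by contrast, only has to return something perceptually close to the input.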

There is very little research in this particular field, since the great majority of research focuses on "lossy" compression methods for video and music distribution, such as streaming video. With "lossy" compression the data need not be identical to the original pre-compressed data, as long as a reasonable level of perceived video or audio quality remains.

While this may be acceptable for video or music, critical data such as financial transactions, communications and computer applications become useless unless reproduced faithfully and reliably, with zero modification.

The first major milestone is a consistent, demonstrable example of the technology; the next is to refine the method to its maximum feasible potential. The technology's initial target customer base will include satellite, communications and media companies.
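The conversion described above can be sketched naively (illustrative only, not the patented method): reading the data as one large integer and taking a single floating-point logarithm loses information, since a 53-bit float mantissa cannot hold a data-sized integer exactly — which is presumably why the method as described relies on a series of logarithms rather than one.

```python
import math

data = b"example"                 # any multi-byte input
n = int.from_bytes(data, "big")   # data -> one large integer (here, 56 bits)
L = math.log2(n)                  # a single floating-point logarithm
m = round(2 ** L)                 # attempt to invert it
print(m == n)                     # False: float precision cannot recover all 56 bits
```

Any lossless scheme built on this idea therefore has to store enough extra detail to recover the integer exactly.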

Comments

  1. Best of luck with this. I thought the NBN might finally enable us to do a complete backup of our computers' systems, so that in the event of total loss we could just quickly retrieve the whole system and resume work in a few moments. As it is now, it takes me at least a whole weekend to do this from scratch.

    But the NBN plans I have seen restrict uploading, as they do now with ADSL. And data limits are also similar.

    A good compression system would solve at least this problem.

  2. So, how's that first milestone going? Can Logarex outperform LZMA (lossless, free of patent encumbrance) yet?

  3. I had the same idea and did some work on this a number of years ago (long before I'd heard about your patent) and eventually concluded that it is a lost cause.

    I know it might seem tantalisingly plausible to build up a bigger logarithm by blending several smaller best-fit ones from a combinatory table, but in practice you still end up needing roughly as many exponent pairs to represent all the detail in the mantissa of most data sets. It's tempting to get excited by the rare wins where a test file sits right on the borderline of a few cheap combos, but in the end information theory still holds true across the average, and there are just as many data sets that end up being costlier than binary.

    But hey, if anyone can crack it, I'm sure a man with your impressive pedigree can, and I wish you the best of luck. The closest I got was when I added a small function language, so that the combining of exponent pairs could use more than just addition. E.g. (A^B) {+-/*} (C^D). Also adding scalars. It didn't work out for me, but if you haven't tried that already, then it might give you some ideas.

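The "combining of exponent pairs" idea in the last comment can be sketched with a greedy decomposition (an illustrative sketch only; the function name and greedy strategy are assumptions, not the commenter's actual approach):

```python
def exponent_pairs(n):
    """Greedily decompose n into a sum of A**B terms (B >= 2) --
    a rough sketch of combining exponent pairs."""
    terms = []
    while n > 0:
        best, pair = 1, (1, 2)                # fallback term 1**2 for small remainders
        for a in range(2, int(n ** 0.5) + 1):
            b = 2
            while a ** (b + 1) <= n:          # largest exponent that still fits
                b += 1
            if a ** b > best:
                best, pair = a ** b, (a, b)
        terms.append(pair)
        n -= best
    return terms

print(exponent_pairs(1000))   # [(10, 3)] -- a rare "cheap" case
print(exponent_pairs(999))    # several pairs, costing about as much as the 10-bit binary form
```

For most integers the list of (A, B) pairs costs at least as many bits as the plain binary representation, which is exactly the counting argument the comment makes.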


Please feel free to contribute. Comments are moderated for fairness and language.
