From the haze of the smoke. And the mescaline. - The Airborne Toxic
Event "Wishing Well"
On 10/7/19 3:46 PM, Matthew Woehlke wrote:
On 04/10/2019 20.17, Roland Hughes wrote:
Even if all of that stuff has been fixed, you have to be absolutely
certain the encryption method you choose doesn't leave its own tell-tale
fingerprint. Some used to have visible oddities in the output when they
encrypted groups of contiguous spaces, nulls, etc. Plus, there are quite
a few places like these showing up on-line.
Again, though, it seems like there ought to be ways to mitigate this. If
I can test for successful decryption without decrypting the *entire*
message, that is clear grounds for improvement.
Sorry for having to invert part of this, but the answer to this part
should make the rest clearer.
I've never once injected the concept of partial decryption. Someone
else tossed that red herring into the soup. It has no place in the
conversation.
The concept here is encrypting a short string, a "fingerprint" known to
exist in the target data, over and over again with different keys and,
for some methods, different salts as well. The results get recorded
into a database. If the encrypted message is in a QByteArray, you run a
walking window down the first N bytes performing keyed hits until you
find a matching sequence; when you find one you generally know which
key (and salt) was used, barring a birthday collision.
Some people like to call these "Rainbow Tables" but I don't. This is a
standard Big Data problem-solving technique.
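To make the mechanics concrete, here is a rough sketch of the table-building
side using OpenSSL's EVP API. The random candidate keys and the in-memory map
are stand-ins for a real key generator and database; nothing below describes
any particular tool.

    #include <openssl/evp.h>
    #include <openssl/rand.h>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    using Bytes = std::vector<unsigned char>;

    // AES-128-CBC encrypt `plain` under key/iv. Sketch only: return codes
    // and error handling are deliberately ignored.
    static Bytes aes128CbcEncrypt(const std::string &plain,
                                  const unsigned char *key,
                                  const unsigned char *iv)
    {
        Bytes out(plain.size() + 16);
        int len = 0, total = 0;
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), nullptr, key, iv);
        EVP_EncryptUpdate(ctx, out.data(), &len,
                          reinterpret_cast<const unsigned char *>(plain.data()),
                          static_cast<int>(plain.size()));
        total = len;
        EVP_EncryptFinal_ex(ctx, out.data() + total, &len);
        total += len;
        EVP_CIPHER_CTX_free(ctx);
        out.resize(total);
        return out;
    }

    int main()
    {
        const std::string fingerprint = "<?xml version=";  // bytes known to lead the plaintext
        std::map<Bytes, std::pair<Bytes, Bytes>> table;     // encrypted window -> {key, iv}

        // Stand-in for the candidate key/salt generator feeding the database.
        for (int i = 0; i < 1000; ++i) {
            unsigned char key[16], iv[16];
            RAND_bytes(key, sizeof key);
            RAND_bytes(iv, sizeof iv);
            Bytes ct = aes128CbcEncrypt(fingerprint, key, iv);
            ct.resize(16);                                  // first block is enough to match on
            table[ct] = { Bytes(key, key + 16), Bytes(iv, iv + 16) };
        }
        return 0;
    }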
As for the nested encryption issue, we never did a root-cause analysis.
We encountered some repeatable issues and moved on. It could have had
something to do with the Debian bug where a maintainer "fixed" some
Valgrind warnings by limiting the possible keys to 32,768. We were
testing transmissions across architectures and I seem to remember it
only broke in one direction. Long time ago. Used a lot of Chardonnay to
purge those memories.
On 10/3/19 5:00 AM, Matthew Woehlke wrote:
On 01/10/2019 20.47, Roland Hughes wrote:
To really secure transmitted data, you cannot use an open standard which
has readily identifiable fields. Companies needing great security are
moving to proprietary record layouts containing binary data. Not a
"classic" record layout with contiguous fields, but a scattered layout
placing single field bytes all over the place. For the "free text"
portions like name and address, the bytes are not only stored in reverse
order but run through a translate-under-mask first. Object-oriented
languages have a bit of trouble operating in this world, but older 3GLs,
where one can have multiple record types/structures mapped to a single
buffer (think a union of packed structures in C), can process this data
rather quickly.
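Purely as an illustration of that style of layout (the offsets and field
split here are invented, not anyone's real record format), the
union-of-packed-structures idea looks roughly like this:

    #include <cstdint>

    // Invented example: one 32-byte wire buffer viewed two ways. The
    // "amount" field is split across non-contiguous bytes and the name is
    // stored reversed; a real proprietary layout would be far messier.
    #pragma pack(push, 1)
    struct WireView {
        std::uint8_t raw[32];            // buffer exactly as transmitted
    };
    struct ScatteredView {
        std::uint8_t amountHi;           // byte 0: high byte of amount
        std::uint8_t nameReversed[10];   // bytes 1-10: name, reversed
        std::uint8_t filler1[9];
        std::uint8_t amountLo;           // byte 20: low byte of amount
        std::uint8_t filler2[11];
    };
    union Record {                       // both views map onto the same 32 bytes
        WireView wire;
        ScatteredView fields;
    };
    #pragma pack(pop)

    // Reassembling the scattered field once the buffer arrives:
    inline std::uint16_t amountOf(const Record &r)
    {
        return static_cast<std::uint16_t>((r.fields.amountHi << 8) | r.fields.amountLo);
    }

Reading through the inactive union member is exactly the old 3GL idiom; it is
well-defined C but not, strictly, well-defined C++, which is part of why
object-oriented languages struggle here.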
How is this not just "security through obscurity"? That's almost
universally regarded as equivalent to "no security at all". If you're
going to claim that this is suddenly not the case, you'd best have
some *really* impressive evidence to back it up. Put differently, how
is this different from just throwing another layer of
encry^Wenciphering on your data and calling it a day?
_ALL_ electronic encryption is security by obscurity.
Take a moment and let that sink in, because it is fact.
Your "secrecy" is the key+algorithm combination. When that secret is
learned, you are no longer secure. People lull themselves into a false
sense of security by regurgitating another urban legend.
Well... sure, if you want to get pedantic. However, as I see it, there
are two key differences:
- "Encryption" tries to make it computationally hard to decode a message.
- "Encryption" (ideally) uses a different key for each user, if not each
message, such that compromising one message doesn't compromise the
entire protocol. (Okay, granted this isn't really true for SSL/TLS
unless you are also using client certificates.)
Thanks for agreeing.
...and anyway, I think you are undermining your own argument; if it's
easy to break "strong encryption", wouldn't it be much *easier* to break
what amounts to a basic scramble cipher?
No. This technique is cracking without cracking. You are looking for a
fingerprint. That fingerprint is the opening string of an XML document,
which must be there per the standard. For JSON it is the quote and colon
stuff mentioned earlier. You take as many bytes from the logged packet
as the key size the current thread is processing and perform a keyed hit
against the database. If found, great! If not, shuffle down one byte and
try again. Repeat until you've exceeded the number of attempts you are
willing to make or found a match. When you find a match, you try that
key or key+salt combination on the entire thing. Pass the output to
something which checks for seemingly valid XML/JSON, then declare either
victory or defeat.
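Sketched in code, with an in-memory map standing in for the database and a
16-byte window as an assumed width, the search loop is nothing more than this:

    #include <cstddef>
    #include <map>
    #include <optional>
    #include <utility>
    #include <vector>

    using Bytes = std::vector<unsigned char>;
    // Maps an encrypted-fingerprint window to the {key, salt} that produced it.
    using FingerprintTable = std::map<Bytes, std::pair<Bytes, Bytes>>;

    // Walk a fixed-width window down the start of the captured packet,
    // performing a keyed hit against the table at each offset. Returns the
    // candidate {key, salt} on a match, or nothing once the attempt budget
    // runs out, in which case the packet goes back into storage.
    std::optional<std::pair<Bytes, Bytes>>
    findCandidate(const FingerprintTable &table, const Bytes &packet,
                  std::size_t window = 16, std::size_t maxAttempts = 256)
    {
        for (std::size_t off = 0;
             off < maxAttempts && off + window <= packet.size(); ++off) {
            Bytes probe(packet.begin() + off, packet.begin() + off + window);
            auto hit = table.find(probe);
            if (hit != table.end())
                return hit->second;   // now try this key/salt on the entire packet
        }
        return std::nullopt;
    }

A hit would then be fed to a full decrypt and an XML/JSON sanity check,
exactly as described above.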
If the fingerprint isn't in the data, you cannot use this technique. You
can't, generally, just Base64 your XML/JSON prior to sending it out,
because they usually create tables for that too; at least I would.
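For what it's worth, Base64 preserves the problem because a fixed prefix
encodes to a fixed prefix, so the encoded form can go straight into the same
table. A one-liner in Qt shows it:

    #include <QByteArray>
    #include <QDebug>

    int main()
    {
        // A fixed plaintext prefix has a fixed Base64 prefix, so the encoded
        // fingerprint can be tabled just like the raw one.
        qDebug() << QByteArray("<?xml ").toBase64();   // "PD94bWwg"
        return 0;
    }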
These attacks aren't designed for 100% capture/penetration. The workers
are continually adding new rows to the database table(s). The sniffed
packets which were not successfully decrypted can basically be stored
until you decrypt them, or your drives fail, or you decide any packets
more than N weeks old will be purged.
These are very targeted attacks, sniffing packets heading to a known
target which has published an XML or JSON API, typically for CC
transactions, but it really could be anything. It could be mortgage
applications using XML/JSON and a known set of endpoints.
The success rate of such an attack improves over time because the
database gets larger by the hour. The rate of growth depends on how many
machines are feeding it. Really insidious outfits would sniff a little
from a bunch of CC or mortgage or whatever processing services,
spreading out the damage so standard trace-back techniques wouldn't
work. The only thing the victims would have in common is that they used
a credit card or applied for a mortgage, but they aren't all from the
same place.
One of the very nice things about today's dark world is that most of its
players are script kiddies. If they firmly believe they have correctly
decrypted your TLS/SSL packet yet still see garbage, they assume another
layer of encryption. They haven't been in IT long enough to know
anything about data striping or ICM (Insert Characters under Mask).
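For readers who have never seen it, that family of byte-level transforms
amounts to something like the toy below: reverse the bytes and fold each one
under a mask. This is only a stand-in for the mainframe-era techniques named
above, with an invented mask value.

    #include <QByteArray>
    #include <algorithm>

    // Toy stand-in for the byte scrambling being described: reverse the
    // bytes and fold each one under a mask. Not the actual mainframe
    // ICM/translate instructions, and the mask value is invented.
    static QByteArray scramble(QByteArray text, unsigned char mask = 0x5A)
    {
        std::reverse(text.begin(), text.end());
        for (char &c : text)
            c = static_cast<char>(static_cast<unsigned char>(c) ^ mask);
        return text;
    }

    static QByteArray unscramble(QByteArray text, unsigned char mask = 0x5A)
    {
        for (char &c : text)
            c = static_cast<char>(static_cast<unsigned char>(c) ^ mask);
        std::reverse(text.begin(), text.end());
        return text;
    }

Applied to the plaintext before encryption, even a toy transform like this
removes the predictable byte patterns the fingerprint attack depends on.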
So... again, you're proposing that replacing a "hard" (or not, according
to you) problem with an *easier* problem will improve security?
I suppose it might *in the short term*. In the longer term, that seems
like a losing strategy.
No. I'm proposing you take the fingerprints out of the underlying data
so you quit weakening your "hard" problem.
He came up with a set of test cases and, sure enough, this system which
worked fine with simple XML, JSON, email, and text files started
producing corrupted data at the far end with the edge cases.
Well, I would certainly be concerned about an encryption algorithm that
is unable to reproduce its input. That sounds like a recipe guaranteed
to eventually corrupt someone's data.
How many encryption algorithms are tested with N different encryption
algorithm wrappers? I would wager an entire case of Diet Mt. Dew that
the answer is none.
How many are tested by wrapping data in N layers of encryption on a
Debian-based Linux platform, then sending it to a Windows platform, a
Mac platform, and an RPM-based Linux platform for decryption? (If you
want to have real fun, add Unisys and an IBM mainframe in here.)
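A cross-platform round-trip test along those lines is easy to sketch. The
encryptLayer()/decryptLayer() helpers below are hypothetical placeholders for
whichever wrapper is under test, not a real API.

    #include <QByteArray>
    #include <QList>

    // Hypothetical placeholders for whichever cipher wrapper is under test;
    // these are not a real API.
    QByteArray encryptLayer(const QByteArray &plain, const QByteArray &key);
    QByteArray decryptLayer(const QByteArray &cipher, const QByteArray &key);

    // Wrap `data` in one layer per key, unwrap in reverse order, and check
    // the result matches the input. In the scenario above, the unwrap half
    // would run on a different platform/architecture than the wrap half.
    bool roundTrips(const QByteArray &data, const QList<QByteArray> &keys)
    {
        QByteArray wrapped = data;
        for (const QByteArray &key : keys)
            wrapped = encryptLayer(wrapped, key);

        QByteArray unwrapped = wrapped;
        for (auto it = keys.crbegin(); it != keys.crend(); ++it)
            unwrapped = decryptLayer(unwrapped, *it);

        return unwrapped == data;   // any mismatch is the corruption described below
    }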
That last part, rotating around the starting machine, was where we hit
the issues. With a single layer nothing went wrong. Once you got past
three layers, weirdness and corruption started happening. Not all of the
time and not in all directions. We didn't waste time figuring it out.
When was aes128_cbc actually added to OpenSSL? According to the
changelog, it looks like 1.0.0 in March 2010. Does that sound correct?
https://www.openssl.org/news/changelog.html
If correct, that means they (the nefarious people) could have started
their botnets, or just local machines, building such a database by some
time in 2011 if they were interested. That's 8+ years. They don't _need_
100% coverage. They just need to "sniff" a CC or mortgage processor
which has a reasonable volume of transactions to have successes. The
thing runs mostly automated. Despite the trademark infringement, it is
kind of a "set it and forget it" thing once written.
People keep trying to focus on some Utopian scenario where brute force
is the only way in and each attempt will only find success with the
absolute last key+salt possible. That's not reality, man. The attacks
which succeed once in a while are the ones which really hurt you.
Putting it another way, if a credit card you only use for gasoline is
the only packet they manage to crack this week, you don't care how
"hard" the encryption is supposed to be when you get the statement and
find the card maxed out buying Coach handbags, Gucci whatever, and a
really great computer, all of which are now for sale on
eBay/Amazon/LetGo/wherever. You just care how much of it you might get
stuck with and how much hassle it will be to get a new card.
How many CC companies ever come back and tell you where or how your card
was compromised? American Express has never told me. Every time I'm
forced to order a hard-to-find part from PartsGeek, I get a new card
from Amex within two weeks.
We haven't even gotten to the possibility of the random number generator
having a "time of day" issue. Identifying something like that could
dramatically reduce the potential number of salts you need to try.
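As a purely hypothetical illustration of why that matters: if salts came from
a clock-seeded PRNG, something like the sketch below enumerates every salt the
generator could have produced within an hour of the capture time, which is a
far smaller space than brute force. No specific library is being accused here.

    #include <array>
    #include <cstdlib>
    #include <ctime>
    #include <vector>

    // If salts/IVs came from a clock-seeded PRNG (e.g. srand(time(nullptr))),
    // knowing the capture time to within an hour shrinks the candidate space
    // to a few thousand seeds. Hypothetical illustration only.
    std::vector<std::array<unsigned char, 16>> candidateSalts(std::time_t approxCaptureTime)
    {
        std::vector<std::array<unsigned char, 16>> salts;
        for (std::time_t seed = approxCaptureTime - 1800;
             seed <= approxCaptureTime + 1800; ++seed) {
            std::srand(static_cast<unsigned>(seed));
            std::array<unsigned char, 16> salt{};
            for (auto &b : salt)
                b = static_cast<unsigned char>(std::rand() & 0xFF);
            salts.push_back(salt);
        }
        return salts;
    }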
--
Roland Hughes, President
Logikal Solutions
(630)-205-1593
http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog
_______________________________________________
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest