As I’ve mentioned maybe once or twice before, I like myAutToExe a good deal. It’s great for tinkering around with AutoIt programs that have been “secured” by compiling to tokens. In some situations, being able to decompile these scripts is an absolute necessity.
I’d like to thank everybody on the forums who helped out in the investigation, and I hope this can be a lesson to greedy bot authors everywhere.
As I previously mentioned, I like AutoIt a great deal, but I like decompiling other people’s stuff even better. Just a couple days ago, the genius who brought us the only 3.2.6 decompiler released a new version, 2.2. I haven’t kept up like I should, but if cw2k ever gives me the okay, I’d gladly keep up a mirror as he releases new versions. For now, at least, I’ll just try to remember to check back regularly and grab the latest versions.
I’m a huge fan of AutoIt – I think the program is a wonderful tool for administrators as well as casual programmers who just like to mess with stuff. However, I recently discovered that the developer of AutoIt, in an ongoing quest for “security,” has disabled the ability to reverse-engineer AutoIt scripts!
Now, I’m all for security when people write software, but giving scripts this level of security actually introduces security risks! I just don’t like running some random AutoIt script without being able to look at the source code. If I run a “normal” compiled app, my virus scanner will generally let me know if it’s safe to run. But a script written in AutoIt can easily be a trojan (or other destructive tool), and because of the nature of scripted programs, it’s unlikely anything will catch it.
Yes, I’m aware somebody could write a C++ app that’s just as dangerous as AutoIt can produce, but this requires a lot more intelligence and effort than just writing an AutoIt script. AutoIt is made to be pretty friendly for non-programmers.
So to me, this is an absurd limitation – all scripts should be reversible for the sake of security! Luckily for the world, somebody has written a usable decompiler called myAutToExe, and I’m providing a local copy of 2.00 Alpha since the site is hard to find and hosted on Angelfire. For the original and latest versions, go to http://defcon5.biz/phpBB3/viewtopic.php?f=5&p=5735.
Note: I haven’t contacted the author of myAutToExe about this (the page has no contact information). I can’t imagine he’d complain, but if anybody knows a way in which I can contact him, I’m all ears. I’d love to set up a proper mirror.
UPDATE: I’ve updated the link to the latest version since being contacted by cw2k (the author of myAutToExe).
UPDATE UPDATE: Fixed the URL for getting the latest version – apparently the forum site moved.
I have been dealing with XSS at my so-called “real job” recently, and it has come to my attention that a lot of people in this world are under the mistaken impression that it’s better to do “input filtering” than “output filtering”. As I pretty much came up with these terms myself (they may or may not exist elsewhere; I’m just too lazy to find out), I’ll define them for you:
Input Filtering: Scrubbing XSS-dangerous data out of your input before it gets saved anywhere.
Output Filtering: Scrubbing XSS-dangerous data only upon display.
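To make the distinction concrete, here’s a minimal sketch of output filtering in Python terms (the function and data are hypothetical, names mine): the raw text goes into storage untouched, and escaping happens only at render time.

```python
import html

def render_comment(raw_comment):
    # Output filtering: the database keeps the user's original text;
    # XSS-dangerous characters are escaped only at display time.
    return "<p>" + html.escape(raw_comment) + "</p>"

stored = "<script>alert('xss')</script>"  # saved to the DB exactly as typed
print(render_comment(stored))  # the script tag is neutralized only on display
```

With input filtering, the `html.escape` call would instead happen before the INSERT, and the database would hold the escaped form forever.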
After echoing user parameters is fixed, you have to look at how you display stored data. This is where the type of scrubbing comes into play – do you scrub the data before storing to your database / file system? Or do you only scrub when you’re about to display the data?
I will soon prove that input scrubbing is for pansies who are paranoid and tend to make up pathetic lies about their imaginary 20-year-old girlfriends.
Why input filtering is inefficient
- It’s bad to store data in a display-specific way (you have to unescape it when producing PDFs, email/text reports, etc.).
- You have to modify other areas of code than just DB storage, such as searching (a search for “<blah>” won’t match the stored “&lt;blah&gt;”), which may not be immediately obvious.
- You could just auto-filter all incoming data, but there may be cases where you really can’t or don’t want to. I personally dislike blind filtering like this unless there is no better option.
- If you have existing data, you have to check it for pre-existing problems. With large data, this can be very slow.
- If you’re truly paranoid (as I am), you still won’t trust the DB data and will need to find a way to have input filtering work nicely with output filtering. This is a whole lot more work than just doing one or the other.
- If you use a good MVC system like Rails, you can actually escape all text fields as they’re read from the database if you want. With a carefully written ActiveRecord plugin for Rails, I’d bet you could have all accessors automatically escape their data if it’s textual, and even provide a method for getting at the unsafe data.
- I still don’t like such blind scrubbing logic, but better to blindly display scrubbed data than to blindly alter data before it hits your database.
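The escape-on-read idea above can be sketched in Python rather than Ruby (purely illustrative; `EscapedField` and `Comment` are made-up names, not any real plugin): accessors escape automatically, with a separate method for the raw data.

```python
import html

class EscapedField:
    # Descriptor that escapes a text field every time it's read.
    # The raw value stays available through a *_raw-style accessor.
    def __init__(self, name):
        self.name = name
    def __set__(self, obj, value):
        obj.__dict__[self.name] = value
    def __get__(self, obj, objtype=None):
        return html.escape(obj.__dict__[self.name])

class Comment:
    body = EscapedField("body")
    def body_raw(self):
        return self.__dict__["body"]

c = Comment()
c.body = "<b>hi</b>"
print(c.body)        # escaped on access: &lt;b&gt;hi&lt;/b&gt;
print(c.body_raw())  # original data untouched: <b>hi</b>
```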
Why input filtering can be dangerous
- If you can’t trust your programmers to do proper output filtering, why would you trust them to do proper input filtering?
- Yes, input filtering is liable to be in fewer locations, particularly if you filter all incoming parameters, but it’s still not a silver bullet, and has a lot of long-term risks when mistakes do happen (read on for details).
- Compare to output filtering in terms of the bug factor:
- Bugs will happen. If you truly believe you don’t ever write code with bugs, then by all means ignore this section. I’ll get a good laugh when you tell me about your first big project that went from a two-week estimate to a six-month half-finished-and-then-rewritten-from-scratch project from hell.
- If you mess up an output filter:
- You probably have an issue that’s confined to a single area on your site (the area you messed up).
- You do a quick hotfix, and the site is once again safe.
- If you mess up an input filter:
- Every area of the site that contains the data you missed is at risk.
- You do a quick hotfix to stop anything new from coming in, but existing data is still at risk.
- You find and quickly fix the very obvious offending data in the database.
- You wait until the site is slow (or you can take it down) and run through all data entered since you suspect the exploit came into existence, fixing it record by record.
- If future XSS issues arise, you have to retroactively fix your old data again instead of merely fixing your filter.
- New XSS vulnerabilities won’t arise, you say? Maybe so, but how many times have we computer folk shot ourselves in the foot with presumptions about the future? (“We’ll never need more than 640K of memory,” “nobody will still be using this old software when Y2K finally hits,” etc.)
- Note that XSS attackers have discovered that in some cases, the backtick character (`) will work for certain JS-oriented attacks. At least two different html_escape-style functions I know of don’t scrub this character. Enjoy retroactive data-fixing? Me too!
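You can see this for yourself with Python’s `html.escape`, one example of such a function: it handles `<`, `>`, `&`, and both quote characters, but leaves the backtick alone.

```python
import html

# html.escape covers < > & " ' -- but not the backtick, so a payload
# relying on backtick-quoted attribute values (an old IE quirk) survives.
payload = "`onmouseover=alert(1)`"
print(html.escape(payload))   # backticks come through untouched
print(html.escape("<b>"))     # &lt;b&gt; -- angle brackets do get escaped
```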
Why input filtering can be better (and my incessant arguments to prove that it really can’t)
The most logical argument I was given is that in a large enterprise, control of data output gets pretty tricky. So as far as I’m concerned, large companies are the only place the below issues even have a tiny bit of merit. And even then….
- In a large enterprise, you know that nobody will inadvertently display unsafe data, because all data is safe.
- Unless of course somebody writes a program that makes changes to the DB. Less likely than a rogue program that merely displays data, I agree, but still a possibility. In an organization that’s big enough to be at risk of multiple apps reading data that wasn’t built by the “proper” people, I’d say there is a definite risk that apps will be writing to said data as well.
- At my job, there have been several cases where somebody who wasn’t even a part of IT (a manager and a content designer) modified data directly in SQL, bypassing any hope of safeguards.
- In a large enterprise, I think it’s even more important than ever that all access to the DB goes through knowledgeable IT staff. Yes, I know this is a pipe dream, but I still think proper procedures can allow output filtering to be the clearly correct option.
- You can detect problems with input filters more easily, because you have the data that could be dangerous right at your fingertips. If need be, write a program that periodically audits your data to check for unsafe characters. If you messed up an input filter, this program can save you.
- Good testing does this same thing for output filtering. It’s far harder to write perfect tests for your app’s HTML output than to write a program to audit the DB for unsafe data, but it’s still the right way to do it.
- In my opinion, it’s a waste of resources when they’re being spent just to keep data from being stored in its original state.
- If you have a large amount of data that is changing all the time, this solution may simply not be doable. In what situation would you have that much data changing that regularly? Oh… I don’t know… maybe in a big corporate enterprise?
I belong to a forum for web game developers and I recently posted about how to keep one’s game from being a target of the most common security problems. The information seems, to me, to be so obvious, but apparently there’s a lot of ignorance about how to secure an application as well as why it matters. So let me relate a tale of exactly why website security is so damned important.
I relate the details of this hackery not only to brag (I am proud to have hacked this game so thoroughly even if it wasn’t much of a challenge), but also to point out how “minor” security issues can destroy a game (or other web application) completely. This is not a “Security on the Web 101” as much as proof that bad security can destroy a good concept.
A long time ago, in a land far far away, there lived a game designer. We’ll call him “Alphonso”. Because that’s his name. Makes things simplest that way, really….
Alphonso had a grand idea for a mobster-oriented PBBG (Persistent Browser-Based Game). His idea was pretty decent overall, and he opened up the short-lived site Mobster World. Don’t bother looking for the site, it died a long time ago. And this story will tell you why.
In this game, Alphonso had built a few key areas that I’m going to cover:

* Logging in
* Jobs
* Buying Items
* Shooting a player
* Reading “private” messages
* Sending messages
Logging in

This was the most absurd area. You’d put in your name, password, and the CAPTCHA image to prove you weren’t a bot. The security image was a collection of three digits. The images were shown to you on the form and you’d enter the digits in the appropriate field. Fine and dandy up to this point. Problem was, the images were shown separately (CAPTCHAs usually show a single image that contains all the numbers/letters), which allows an attacker to analyze the filenames of each image to figure out which corresponds to a given number. But worse, the filenames were #.jpg. That is, the image representing “1” was “1.jpg”. So I could look at the form and see the <img> tags to know exactly what I needed to type – very easy for a bot to do, by the way.
Just when I thought the login couldn’t get any worse, I noticed a “hidden” field. In HTML, a hidden field doesn’t mean the user cannot see it! It merely means the field isn’t immediately visible! This particular hidden field contained the exact security string Alphonso was expecting. So my bot was very quickly able to grab the expected CAPTCHA string and supply it. The CAPTCHA succeeded in stopping only the most inexperienced of hackers, and they weren’t likely to know how to script a bot properly anyway.
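Here’s a sketch of just how trivially a bot defeats this scheme. The HTML snippet is hypothetical, but it follows the setup described above: per-digit image filenames plus a hidden field holding the answer.

```python
import re

# Hypothetical login-form HTML, per the description: one <img> per digit,
# named after the digit itself, plus a hidden field containing the answer.
form_html = """
<img src="/captcha/4.jpg"><img src="/captcha/7.jpg"><img src="/captcha/2.jpg">
<input type="hidden" name="expected" value="472">
"""

# Route 1: read the digits straight out of the image filenames.
digits = "".join(re.findall(r'/captcha/(\d)\.jpg', form_html))

# Route 2: don't even bother -- the hidden field hands over the answer.
hidden = re.search(r'name="expected" value="(\d+)"', form_html).group(1)

print(digits, hidden)  # both yield the CAPTCHA answer: 472 472
```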
Also please note that having a CAPTCHA may indeed stop bots (though rumor has it good anti-CAPTCHA technology is more accurate than most users), but it may also annoy regular users, especially those with minor-or-worse visual problems. If you insist on a CAPTCHA, at least make it accessible to all users.
Jobs

There were two kinds of jobs, where a player could perform a job to gain stats and/or money. The “big jobs” were dangerous (rob a bank, steal a car, etc), and could land you in jail if you failed. The “small jobs” weren’t dangerous – things like petty theft, bar fights, etc. They didn’t have the same rewards, and therefore I didn’t bother to try hacking them.
Each job would give you two or three options for how to perform the job, usually a situation where you could choose to be stealthy or direct or whatever (robbing a bank via the front door or back door, and other totally unimportant crap). But when the page was created, the actions were pre-determined. The HTML would have hidden form fields saying whether a given button was going to be successful. This meant when I chose to rob a bank, the “front door” option would already be set up via hidden fields to succeed or fail. So one could very easily submit the form with any button they wanted, so long as they set the value of that hidden field to “1” instead of “0”. Since big jobs were so risky, success yielded pretty good cash. 100% success meant tons of money and no time wasted waiting for your jailtime to end.
Moral: Don’t set up future actions in hidden fields! It’s stupid and very easy to hack! All Alphonso needed to do was do the random check after the form was submitted and this issue would not have existed.
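A sketch of that fix, with made-up job names and success rates: the outcome is rolled on the server when the form comes in, so there’s nothing in the page for a cheater to edit.

```python
import random

# Illustrative numbers -- the real game's odds are unknown to me.
JOB_SUCCESS_RATES = {"bank_front_door": 0.30, "bank_back_door": 0.45}

def attempt_job(job_choice):
    # The client sends only which option it picked; success or failure
    # is decided here, at submission time, never precomputed into the form.
    rate = JOB_SUCCESS_RATES.get(job_choice)
    if rate is None:
        raise ValueError("unknown job option")
    return random.random() < rate
```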
Buying Items

But why bother getting a bunch of cash? What a waste of time! Because the game was so poorly scripted so far, I decided to look at buying items, and sure enough I was confronted with awesome hidden fields. The hidden fields would tell the game that a certain button would buy item X at price Y. Hack the form via a bot, and you could buy any item for $0. This meant the most powerful gun for $0. All the ammo you wanted for $0. Bodyguards for $0 apiece. Bulletproof vests? $0. Medical kits: special limited time offer, two for $0!
So you buy great items for free and you realize you don’t need money.
This is a clear case of relying too heavily on the form to determine what’s going to happen. Instead of having the form store the cost of things, it should be stored somewhere on the server – database, bdb file, whatever. User buys an item, sends that item’s ID to the server, and the server pulls the price from the only source it can trust: itself.
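That server-side lookup might look like this (item names and prices are made up): the client sends an item ID and nothing else, and the price comes from the server’s own records.

```python
# Illustrative catalog -- in the real game this would live in a database.
ITEM_PRICES = {"tommy_gun": 5000, "body_armor": 1200}

def buy_item(player_cash, item_id):
    # The price comes from the only source the server can trust: itself.
    price = ITEM_PRICES.get(item_id)
    if price is None:
        raise ValueError("no such item")
    if player_cash < price:
        raise ValueError("not enough cash")
    return player_cash - price  # remaining balance

print(buy_item(6000, "tommy_gun"))  # -> 1000
```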
Shooting a player
Mobster World was written to stress uneasy alliances. People start shooting each other and the game degrades into total chaos if some of the mob families (essentially in-game alliances) don’t force order by disciplining their members. Because of this, shooting a player was usually not a good idea without a good-sized family behind you. Unless, of course, you could cheat.
The “shoot a player” area was also plagued with hidden fields. By setting the %-to-hit field to 100, all shots would hit. The best gun only hit 50% of the time, meaning you could fire off a shot and do no damage, but still have all sorts of consequences. And if your target had bodyguards or armor (both were essentially just ways to increase bullet-taking ability), your shot could be totally wasted. So again, shooting was usually limited to a family trying to take down another family. But with a 100% chance to hit, free healing (bodyguards, body armor, medical kits), and free ammo, a cheater could do tremendous damage relatively safely.
The game allowed a shot every 10 minutes, so even a cheater had his limits, but with a single bot I was able to knock an unsuspecting don (leader of an entire mob family) down to 6 bodyguards (from 12) in a matter of about two hours. A smarter cheater could have run multiple bots and destroyed an entire in-game alliance in an hour or less.
This is exactly the same as above – there was no need for the form to ever know the chance of a successful shot. Calculate that on the server and only on the server. Yeah, you might want to display it to the user, but don’t let the user be the one who tells you anything other than the weapon they’re using (and of course validate that they own the given weapon and have ammo for it) and the player they’re trying to shoot.
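As a sketch (weapons and hit chances are invented): the client names a weapon and a target; the server validates ownership and ammo, then rolls the hit itself.

```python
import random

# Illustrative numbers, not the game's real stats.
WEAPON_HIT_CHANCE = {"pistol": 0.25, "best_gun": 0.50}

def resolve_shot(shooter, weapon_id, target_id):
    # Trust only server-side records: verify the shooter owns the weapon
    # and has ammo, then compute the hit chance here -- never from the form.
    if weapon_id not in shooter["weapons"]:
        raise PermissionError("player doesn't own that weapon")
    if shooter["ammo"].get(weapon_id, 0) <= 0:
        raise PermissionError("no ammo for that weapon")
    shooter["ammo"][weapon_id] -= 1  # ammo spent whether or not the shot lands
    # (a real game would also validate target_id exists, is shootable, etc.)
    return random.random() < WEAPON_HIT_CHANCE[weapon_id]

shooter = {"weapons": {"pistol"}, "ammo": {"pistol": 3}}
hit = resolve_shot(shooter, "pistol", target_id=42)
print(hit, shooter["ammo"]["pistol"])  # ammo is now 2 regardless of outcome
```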
Reading “private” messages
This is where we move from forms to URLs. Reading a message would require a hit to a page like “/messages.php?id=xxx”, where xxx is the id of the email. Well, because Alphonso didn’t think users could modify the URL themselves, you could put in any id you wanted, and then read anybody’s email. Using this passive cheat, you could see what your enemies planned. Following up with a similar method on the message deletion URL, you could see your enemies’ plans but keep them from letting each other know! I was able to discover that my “enemies” thought I was an ex-player they had pissed off a while before I started playing. I catered to this fear and made up all kinds of interesting stories about revenge and such.
Simple fix here – if a user requests access to anything private, make sure they are authorized to see/edit that item!
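That check is a few lines (the data here is illustrative): deny the request unless the message belongs to the requesting user, and use the same error whether the message is missing or merely not theirs, so you don’t leak which IDs exist.

```python
# Illustrative message store -- stands in for the game's database.
MESSAGES = {10: {"owner": "alice", "body": "meet at midnight"}}

def read_message(current_user, message_id):
    # Authorization check before serving anything private.
    msg = MESSAGES.get(message_id)
    if msg is None or msg["owner"] != current_user:
        # Same error either way: don't reveal whether the ID exists.
        raise PermissionError("not your message")
    return msg["body"]

print(read_message("alice", 10))  # the owner reads it fine
```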
Sending messages

Once I got bored of looking for “boring” exploits, I decided to check out XSS possibilities. I’m not a security expert, so I only knew how to do something similar to what the Wikipedia article calls a “type-2 attack”. And I wasn’t interested in stealing these people’s accounts or anything, I just wanted to mess with their game.
When sending a message, I found that I could embed any HTML I wanted. So with very little effort, I made the private message receipt form appear to have a button on it that looked like the usual “Delete” button, and made the rest of the real page end up hidden so that the only button on the form ended up being mine. When my delete button was clicked, it actually took the user to the “Shoot a player” page, with one of my enemies as the target.
After some testing with a friend, I discovered that I could make a user run literally any action in the game, from failing a big job (giving them jailtime) to shooting their own don to going into hiding (forcing them to log out for 8 hours of real time, unable to perform any actions). Had I been evil enough, I could have logged out all the players who disliked me except for one, and systematically killed them one at a time.
With a little more tweaking, I found that I could use AJAX to actually make the person perform these actions without even clicking a button. The incoming message could be as simple as “You suck!”, and by merely viewing it, the player committed to the action(s) of my choosing.
Email to Alphonso
I wrote Alphonso an in-game email asking if he was aware of cheating issues. I figured he’d deny it like so many web app designers who don’t know security. He surprised me:
yes I am aware of it and thank you very much for assisting me in this game: I have other areas that I am repairing first and I will be getting there soon. Please continue to inform me of areas that you find.
At this point I felt pretty bad and told him the truth – I’d been exploiting the game from day 1, and I pointed out all the areas I thought he needed to look into.
Read my response here if you’re curious how much of a dick I can be when I’ve hacked you black and blue.
Do not assume users won’t edit forms and submit bogus data. Do not let a user alter or view anything he doesn’t own (if he says he wants to view message id 10, make sure he is authorized to do so!). Cookies, URLs, and form fields are extremely easy to edit!
There is also the unmentioned SQL Injection attack. I can’t help much with these, as I know very little about the attack, but this Wikipedia article will give you a great deal of help. The most important thing here is that most database libraries have built-in features for keeping things at least moderately safe (bind variables, for instance, such as “SELECT * FROM FOO WHERE ID = ?”, where the library will make sure the variable that’s substituted for the ‘?’ is safe). USE THEM!
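Here’s what bind variables look like with Python’s sqlite3 module (any library with parameter substitution works the same way; the table and data are illustrative): the hostile string is bound as a plain value, so it never becomes SQL.

```python
import sqlite3

# Tiny throwaway database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER, name TEXT)")
conn.execute("INSERT INTO foo VALUES (1, 'alice')")

# Hostile input stays inert as a bound value: the whole string is
# compared against the id column, so no rows match and nothing is injected.
user_input = "1 OR 1=1"
rows = conn.execute("SELECT * FROM foo WHERE id = ?", (user_input,)).fetchall()
print(rows)  # []

# Legitimate use of the same query.
safe = conn.execute("SELECT name FROM foo WHERE id = ?", (1,)).fetchall()
print(safe)  # [('alice',)]
```

Contrast that with string concatenation (`"... WHERE id = " + user_input`), where the same input would rewrite the query to return every row.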
Web security is much more important than most programmers seem to realize. If you want a game or other app to get popular and last a long time, do not skimp on security. Or you, too, could end up with a good idea that does as well as Mobster World.
“PCI compliance”:http://en.wikipedia.org/wiki/PCI_DSS is a good idea. In theory. At my job we’re adopting all these standards to make all our users’ experiences better, which is really a great thing. But just like every other “good idea in theory”, this one is being abused in horribly stupid ways.
As a professional web programmer who actually cares about keeping my job, I do spend time learning little tidbits about security now and then. And on our team, we have a pretty effective security specialist who makes sure things like "XSS":http://en.wikipedia.org/wiki/Cross-site_scripting and "SQL Injection":http://en.wikipedia.org/wiki/SQL_injection aren’t going to bring down our rather important e-commerce sites. I’m not even half as knowledgeable as this guy, but I still consider myself a proficient web security person. So to me, being treated like I don’t even know the definition of “security” can be a bit frustrating.
Recently we had a required meeting for PCI compliance. It was not something anybody could get out of, not even our security specialist who regularly attends security conferences. Okay, right, people need to know about security issues. Fine, we’ll all go to Security 101 and be able to have a quick laugh, right?
Yeah, more like a long and somehow excruciatingly-painful laugh. We learned the following things, no joke:

* Do not store passwords on a piece of paper under your desk.
* Do not hold the locked doors open for people who clearly don’t belong in the building.
* If you lose a laptop that contains customer data, such as credit card numbers, report it to a manager. (I don’t know what the other options for this situation could even BE… pretend you still have said laptop by building one out of cardboard boxes?)
* If you see a total stranger sitting at the desk of somebody who is clearly not that stranger, you should report it.
There were other points to learn. Something about “report any anomaly that isn’t normally there” (isn’t that kind of the definition of anomaly?), though that’s not stupid advice as much as a funny way to word the stupid advice.
Then there’s the “don’t ask don’t tell” security policies. One of the speakers talked about how he had to fire a guy who was using some cracking software to test the strength of user passwords. Because, you know, he was using EVIL HACKER SOFTWARE, by golly! The speaker actually said, “He was using software to test the strength of passwords, and while he claimed it was a security test, that’s something hackers do.” Don’t get me wrong, maybe the dude was a malicious hacker (yes, not all “hackers” are malicious), but I’d have liked to hear why our illustrious consultant friend was so sure of this guy’s evil-doing ways….

I had a recent experience that was similar, so the subject is a bit of a sore spot. There was no firing, but I was “talked to” by an exec for having tested and then written up a report to my managers when I discovered some security problems. I guess I made the mistake of actually verifying that my hunch was correct – verification required me to H4X0R other people’s accounts (with their consent, mind you).
Back on topic… so not only was this class totally below everybody in my department, but the only lesson I learned was that you never, ever, point out security flaws that look too technical in nature, otherwise you’re a suspected hacker. Awesome message, PCI consultants! I salute you!
I guess what I’m saying here is that PCI compliance is a great thing when it comes to the big picture – store credit card data safely, don’t store the CVC data at all (the little 3- or 4-digit number on the back of your card), never send unencrypted customer data anywhere, etc. But once you bring consultants into the mix, every good idea turns to shit.
I’m starting to think that lawyers spawned technology consultants….
On the heels of my amazing discovery of the "PC Mesh Hide Files and Folders":http://blog.nerdbucket.com/articles/2007/01/15/revolutionary-new-software software, I make yet another Awesome Software Discovery: "jcap":http://www.archreality.com/jcap/!
Spammers, beware! As long as we have people like Arch Reality working on our side, your days are numbered!
…or are they?
This one blows my mind. PC Mesh has a pretty crappy concept, but these folks really take the cake! Arch Reality’s only saving grace is the disclaimer that came over a year after jcap’s release:
***NOTICE (01.10.2006): The developer assumes no liability with this resource and it is provided as is. This script is referred to as a “security development” because it can provide some minimal level of security. While it does seem to be an effective elementary form of security the developer does not claim that it is an impenetrable solution and thus the developer does not recommend implementing it for the protection of highly sensitive data.
I’ve done a small amount of digging, and sadly there are people out there who use this product, and think it provides some measure of security. This kind of ignorance is so easily avoided if the people who write software would spend the half hour to research the actual problem they’re trying to solve.
If I can reach just one person, and that person keeps from hiring these horribly untalented hacks, I’ll feel this blog post was more than worthwhile.
There is a company out on the fringes of technology. Making software that most of us only dream of being able to write. Scoffing at the current obsolete methodologies and practices, these brave new developers have recently pioneered an awesome new era in software development.
This software company is clearly just another one of your typical geniuses not recognized in their time, as the very unscrupulous “CNet / Downloads.com”:http://www.download.com/PC-Mesh/3260-20_4-6263078.html reviews have been far too harsh on this enterprising company.
“PC Mesh”:http://www.pcmesh.com is, of course, the company to which I refer. It is with the most sincere amazement I discovered this little gem of a company today. Or more specifically, the discovery was their “Hide Files and Folders”:http://www.pcmesh.com/hide-files-folders.htm software.
How can I make these claims about this company? Well, for starters, their web site tells us all we need to know: PCMesh Hide Files and Folders is a revolutionary new software product…. But I’m not an idiot – I know to do my homework and not take everything at face value, even a statement so indisputable as that one. So how do we know these guys are the real deal? I’ll go through their feature list, item by item, and explain just how brilliant they are. Some of what you’re about to read may be difficult to accept, but keep in mind that true brilliance will often challenge that which we have been taught to believe, and that challenge can sometimes be difficult to accept. Now, on with the -propaganda bashing- product highlights:

* Invisible from the operating system, invisible from virus attack and invisible from spying eyes that won’t even know the cloaked files or directories are present.
** Wow. Just… wow. Okay, invisible files are protected from virus attack and spies. Humans won’t know to look for cloaking software, of course, because this concept is totally new and unique, and even now that it’s out, unauthorized people would never dream of doing research to learn of this exciting new software. As for viruses, yeah, they can’t infect what they can’t find. Too bad most people want to hide data files, not fracking executables. And too bad that when you make those files visible, a virus will then see the OS reading the files and infect them. And too bad a smart virus could easily be written to infect this POS program in such a way as to destroy the data as you try and make it visible. But other than that, yeah, this software is amazingly effective.
* Encrypted files are still visible on the hard drive. This makes them vulnerable to attack from anyone who is interested enough in the content of the files to spend time trying to decipher them. And with more and more hackers intent on defeating modern encryption algorithms, a need exists for a better type of protection.
** In fact, this may be the only statement that’s partially true. Granted, most encryption today is nearly unbreakable, especially for home computers that don’t house highly-sensitive data, but otherwise this isn’t a “bad” thing to say. Better encryption standards are always a good thing. Questioning the strength of today’s encryption is certainly a worthy goal. Course, I’m not sure where they got the idea that “more and more hackers” are intent on defeating modern encryption algorithms. Haven’t droves of hackers (and security specialists and general security enthusiasts) always been interested in defeating encryption algorithms? Without those people, we’d all still be using Caesar shift ciphers!
* In addition to rapidly becoming obsolete, current encryption programs are slow.
** Rapidly becoming obsolete? Gosh, even the encryption algorithms that are considered to be broken are still pretty strong for the average computer user’s needs. And anybody with data so sensitive that it needs unbreakable encryption can probably deal with the fact that they need to update encryption methods every few years.
* It takes as long as 10 minutes per 200 MB to encrypt or decrypt a file, while PCMesh Hide Files and Folders executes instantly regardless of the file size or number of files/folders being protected. Just one click is all it takes to render any file or directory invisible.
** Okay, I don’t know much about encryption speeds. I have to be honest, this could be completely true for a really awesome encryption algorithm. So let’s say they have two semi-true statements. Let’s note here that this software “executes instantly”, which means to me it flags files in some way (prefixing them with $sys$, perhaps?), and doesn’t do any kind of encryption.
* Data that’s protected by PCMesh Hide Files and Folders is not visible, so it can’t be attacked. In fact, the software itself does not even run continually, so it does not announce its presence to snoopers and hackers. The only time the software is active is when it’s being used to hide or reveal protected files or directories.
** This statement (or series of statements) is so ridiculous I am amazed. "Security through obscurity":http://en.wikipedia.org/wiki/Security_through_obscurity is just plain stupid. If an attacker simply finds out about this garbage software, they’ll know to attack the “invisible” files. And since the files are still on the system, there is no way to truly make them hidden – if this software can get to them, so can an attacker. Worse, the authors actually believed that an attacker would need something continually running in order to realize what’s going on. I imagine PCMesh is populated by people who’ve never even read a single article on security, encryption, or hacking. If hackers had no access to the internet and didn’t know how to research new “protection” schemes, they really wouldn’t ever be a threat.
* Better Than Encryption
** Though this is higher on the page than the last few items (it’s their header, in fact), I thought I’d mention it here just after the security bit, to point out how absurd the claim is. Obscurity is /never/ better than encryption for sensitive files. It’s only better than encryption when it comes to files you don’t need strong protection on, and situations where you just want to keep the honest people honest. Nobody can currently break AES, but just spending some time hacking through this product’s disassembly (after unpacking their undoubtedly “proprietary” packed executable) would probably reveal how to find the “hidden” files. Though I suspect it’ll turn out to be similar to the "Sony rootkit":http://en.wikipedia.org/wiki/2005_Sony_BMG_CD_copy_protection_scandal BS from 2005.
* Hide files or folders of any size instantly. There is no processing time.
** Most of the bullet point “highlights” are just repetitive crap from the “Better Than Encryption” section of this website. But this one struck me as funny. It’s really no big deal; it’s just that with computers (and pretty much anything), there’s no such thing as “no processing time”. I dunno, I’m picky, shut up.
So clearly this product kicks ASS. Go out and buy it today.