PasteBins – Usenet 2.0
Here I am again with a crazy project of mine: HaxTools.
Ok, the name wasn’t particularly well thought out, but I think you will find the idea quite well engineered.
On a little background note, let me introduce you to Usenet. Usenet started off in 1979 as a forum-like service: a discussion board where people could discuss and share opinions, very much like the forums we know nowadays. Instead of your web browser, though, you would use an e-mail client with support for NNTP (the newsgroups protocol) or an application dedicated to newsgroups. In other words, unlike forums, this service doesn’t work over HTTP but over a different protocol, NNTP.
What happened next was that people realized the potential of newsgroup servers for other purposes, namely file distribution from a centralized server (as opposed to peer-to-peer). Newsgroup servers are most commonly hosted on very powerful Internet backbones, which makes them great platforms for distributing all kinds of binary files to many people at the cost of a single upload.
And so it started happening. Programs were developed that could post binary files to newsgroups, and newsgroup clients able to download binary files also started showing up. From posting innocent files to using Usenet as a gateway for piracy was quite a small leap, and nowadays one could say Usenet is used more for illegal content than for actual discussion of ideas.
Now, back to my application. HaxTools is a proof-of-concept set of tools; therefore, I won’t be liable for any damage it causes to your computer or for whatever uses people give it. In principle, the program does not violate any terms of service, although it is likely that the service it currently uses (http://rafb.net/paste/) will put measures in place to fight it.
My application makes use of pastebin sites (like RAFB.net) to store files online. HaxUploader lets you choose the granularity of the upload, up to 500 KB (meaning your file will be sliced into 500 KB pieces), and accepts files of up to 10 MB. This limit is in place so that your computer doesn’t crash: I had no concern for optimization, so all operations are done in memory, which can be taxing for big files. Optimizations could certainly be made so that everything is streamed from the hard drive instead, making the process slower but workable for bigger files. Depending on the success of this proof of concept, I may get into optimizing the idea.
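To put some numbers on that (my own back-of-the-envelope math, not figures the tools report): Base64 inflates the data by about a third, so a full-size upload at the coarsest granularity comes out at roughly 28 pastes.

```python
import math

file_size = 10 * 1024 * 1024                  # the 10 MB upload limit
encoded_size = math.ceil(file_size / 3) * 4   # Base64 turns every 3 bytes into 4 characters
granularity = 500 * 1024                      # 500 KB slices (assuming KB means 1024 bytes)

print(math.ceil(encoded_size / granularity))  # -> 28 pastes, i.e. 28 links to pass around
```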
How does it work from a user’s perspective?
Nothing could be simpler. The user chooses a file in HaxUploader, presses Upload and waits until the green bar fills completely and links show up in the box below. Once the links are there, you just have to copy them (never change their order!) and distribute them to other people.
From the receiver’s perspective, they open HaxDownloader, paste in the links in the SAME order as they received them and press Download. After a while, a message will appear saying the process is complete; at this point, in the same folder HaxDownloader was run from, there will be a NewFile.bin. This is the standard name for all downloaded files, so you’ll have to rename it to whatever you like and, importantly, change the extension back to what it originally was. This too is something an actual program could handle better.
Once that is done, your file is ready to use. Whether it is a RAR file, an MP3 or anything else, it will work with all file types. Now let’s get to how it works from the code’s perspective.
What’s the whole idea? In a nutshell, HaxUploader grabs the file it is meant to upload, streams it into memory (treating it as a binary file, so it works for any kind of file) and then converts the file contents to Base64. This is the standard format in which files are posted to Usenet, so the core idea is kept. With the contents in Base64, it splits them into smaller chunks (sized by the granularity setting) and uploads each chunk as a separate ‘pasting’.
When it has submitted all the parts and acquired all the links to the ‘pastings’, we’re all done and it simply outputs the links to the user.
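To make that concrete, here is a minimal Python sketch of the upload side. Mind that this is not the actual HaxUploader code: the paste endpoint, the 'text' form field and the assumption that the service redirects to the new paste are all placeholders for illustration.

```python
import base64

import requests  # an off-the-shelf HTTP client, standing in for whatever the real tool uses

PASTE_URL = "http://rafb.net/paste/paste.php"  # assumed endpoint, for illustration only
CHUNK_SIZE = 500 * 1024                        # the 500 KB granularity setting

def upload(path):
    # Read the whole file into memory as binary and Base64-encode it,
    # just like files posted to Usenet.
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")

    # Slice the Base64 text into fixed-size chunks.
    chunks = [encoded[i:i + CHUNK_SIZE] for i in range(0, len(encoded), CHUNK_SIZE)]

    # Submit each chunk as a separate 'pasting' and collect the links.
    links = []
    for chunk in chunks:
        response = requests.post(PASTE_URL, data={"text": chunk})
        links.append(response.url)  # assumes the service redirects to the new paste
    return links
```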
The order in which the links are arranged does matter, and it is imperative that they stay in the order they were given to you; otherwise it simply won’t work, as I haven’t put any ordering measure in place. (It could be done, though, and an actual application would need it; see the sketch below.)
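One simple ordering measure, were you to build it, would be to tag every chunk with its position before pasting and sort on the other end. Again, this is just a sketch of the idea, not something HaxTools does:

```python
def tag_chunks(chunks):
    # Prefix each Base64 chunk with a 'part i/N' header line so the
    # downloader can restore the order even if the links get shuffled.
    total = len(chunks)
    return [f"part {i}/{total}\n{chunk}" for i, chunk in enumerate(chunks, start=1)]

def sort_chunks(tagged):
    # Recover the original order from the header line of each paste,
    # then strip the headers off again.
    def part_number(text):
        header = text.partition("\n")[0]  # e.g. 'part 3/28'
        return int(header.split()[1].split("/")[0])
    return [t.partition("\n")[2] for t in sorted(tagged, key=part_number)]
```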
The downloader program simply fetches the contents of the paste links, puts them all back together and decodes the Base64 string into binary, recreating the file.
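In sketch form, it is the mirror image of the upload snippet above, with the same caveats; in particular, it assumes each link serves the raw chunk text, which a real pastebin may only do through a dedicated ‘raw’ view:

```python
import base64

import requests  # same hypothetical HTTP client as in the upload sketch

def download(links, out_path="NewFile.bin"):
    # Fetch every paste in the exact order the links were supplied.
    pieces = [requests.get(link).text for link in links]

    # Reassemble the Base64 string and decode it back into the original bytes.
    with open(out_path, "wb") as f:
        f.write(base64.b64decode("".join(pieces)))
```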
As a final note, there aren’t many protections or validations in place; that is why this is a proof of concept. I just wanted to prove that it can be done.
For now I am releasing only the binaries, not the source code. If you are interested in the source code, you can contact me at tiago[at]espinhas[dot]net and we can discuss it.
Until the next time!