The internet has become a staple of modern life. We use it to shop for what we need and want, talk to friends and family, run businesses, meet new people, watch movies and TV, and pretty much everything else you can think of. In short, it has given birth to a new age in human history.
The last example of this type of widespread change was the industrial revolution. But unlike the digital revolution, which took place over less than half a century, the transition to industrialized societies took hundreds of years. This rapid change is just further proof of how much the internet is reshaping the way we live.
The internet started in the 1950s as a small, government-funded project. But have you ever wondered how these humble beginnings led to worldwide connectivity?
If you have, read on for a detailed summary of the history of the internet.

Timeline of the Internet

The invention of the internet took nearly 50 years and the hard work of countless individuals. Here’s a snapshot of how we got to where we are today:

Part 1: The Early Years of the Internet

When most of us think of the early years of the internet, we tend to think of the 1990s. But this period was when the internet went mainstream, not when it was invented. In reality, the internet had been in development since the 1950s, although its early form was a mere shell of what it would eventually become.

Wide Area Networking and ARPA (1950s and 1960s)

For the internet to become possible, we first needed computers. Mechanical calculating devices date back centuries, but the first digital, programmable computers broke onto the scene in the 1940s. Throughout the 1950s, computer scientists began connecting computers within the same building, giving birth to Local Area Networks (LANs) and planting the idea that would later morph into the internet.

In 1958, U.S. Secretary of Defense Neil McElroy signed Department of Defense Directive 5105.15, creating the Advanced Research Projects Agency (ARPA). Amid Cold War tensions, ARPA was tasked with creating a system of long-distance communication that did not rely on telephone lines and wires, which were susceptible to attack.

However, it wasn’t until 1962 that J.C.R. Licklider, an MIT scientist and ARPA employee, and Welden Clark published their paper “On-line man-computer communication.” This paper, really a series of memos, introduced the “Galactic Network” concept: the idea of a network of connected computers that would allow people to access information from anywhere at any time. Eventually, the idea of a “galactic network” became known as a Wide Area Network, and the race to create this network became the race to create the internet.

Because this idea so closely resembles the internet today, some have chosen to name Licklider the “father of the internet,” although the actual creation and implementation of this network resulted from the hard work of hundreds if not thousands of people.

The First Networks and Packet Switching (1960s)

To build the internet, researchers needed ways to connect computers and make them communicate with one another. In 1965, MIT researcher Lawrence Roberts and Thomas Merrill connected a computer in Massachusetts to one in California using a low-speed dial-up telephone line, a connection credited as the first-ever Wide Area Network (WAN). While the two men made the computers talk to one another, it was immediately obvious that the telephone system of the time could not reliably handle communication between computers. This confirmed the need for a technology known as packet switching to enable faster, more reliable transmission of data.

In 1966, Roberts was hired by Robert Taylor, the new head of ARPA (later renamed DARPA), to realize Licklider’s vision of a “galactic network.” By 1969, the early framework of the network, named ARPAnet, had been built, and researchers were able to link a computer at Stanford with one at UCLA and communicate using packet switching, although messaging was primitive. Shortly thereafter, also in 1969, computers at the University of Utah and the University of California, Santa Barbara were added to the network. Over time, the ARPAnet grew, and it served as the foundation for the internet we have today.

However, there were other networks, such as the Merit Network at the University of Michigan and the CYCLADES network, developed in France. Donald Davies and Roger Scantlebury of the National Physical Laboratory (NPL) in the United Kingdom were also developing a similar network based on packet switching, and countless other versions of the internet were in development in research labs around the world. In the end, the combined work of these researchers helped produce the first versions of the internet.

Internet Protocol Suite (1970s)

Throughout the rest of the 1960s and into the early 1970s, different academic communities and research disciplines, desiring to have better communication amongst their members, developed their own computer networks. This meant the internet was not only growing, but that there were also countless versions of the internet that existed independently of one another.

Seeing the potential of connecting so many different computers over one network, researchers, notably Robert Kahn of DARPA and Vinton Cerf of Stanford University, began looking for a way to link the various networks. What they came up with was the Internet Protocol Suite, made up of the Transmission Control Protocol and the Internet Protocol, better known as TCP/IP. The introduction of this concept marked the first use of the word “internet,” shorthand for “internetworking,” which reflects the internet’s initial purpose: to connect multiple computer networks.

The main function of TCP/IP was to shift the responsibility for reliability away from the network and toward the host by using a common protocol. This meant any machine could communicate with any other machine, regardless of which network it belonged to, allowing many more machines to connect with one another and enabling the growth of networks that much more closely resemble the internet we have today. By 1983, TCP/IP had become the standard protocol for the ARPAnet, entrenching this technology in the way the internet works. From that point on, however, the ARPAnet became less and less significant until it was officially decommissioned in 1990.
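The host-to-host model TCP/IP introduced is still what every networked program uses today. As a minimal illustration (a sketch, not any historical implementation), the Python snippet below opens a TCP connection between two endpoints on one machine and exchanges a message; the addresses and payload are arbitrary examples.

```python
# Minimal TCP exchange using Python's stdlib socket module.
# TCP/IP puts reliability in the hosts: the two endpoints below speak a
# common protocol, regardless of what networks sit between them.
import socket
import threading

def run_server(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)          # receive the request bytes
        conn.sendall(b"ACK: " + data)   # reply to the sender

# Bind to an ephemeral port on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_server, args=(server,))
t.start()

# The "client" host opens a TCP connection and sends a message.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

t.join()
server.close()
print(reply.decode())  # prints "ACK: hello"
```

The same code works unchanged whether the two endpoints share a desk or a continent, which is exactly the point of a common internetworking protocol.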

Part 2: The Internet Goes Mainstream

By the middle of the 1980s, the growth of the internet combined with the introduction of TCP/IP meant the technology was on the brink of going mainstream. However, for this to happen, massive coordination was needed to ensure the many different parties working to develop the internet were on the same page and working towards the same goal.

The first step in this process was to turn responsibility for managing the development of the internet over to other government agencies. In the U.S., NASA, the National Science Foundation (NSF), and the Department of Energy (DOE) all took on important roles. By 1986, the NSF had created NSFNET, which served as the backbone for a TCP/IP-based computer network.

This backbone was designed to connect the various supercomputers across the United States and to support the internet needs of the higher education community. Furthermore, the internet was spreading around the world, with networks using TCP/IP across Europe, Australia, and Asia. However, at this point, the internet was only available to a small community of users, mainly those in the government and academic research community. But the value of the internet was too great, and this exclusivity was set to change.

Internet Service Providers – ISPs (Late 1980s)

By the late 1980s, several private computer networks had emerged for commercial purposes that mainly provided electronic mail services, which, at the time, were the primary appeal of the internet. The first commercial ISP in the United States was The World, which launched in 1989.

Then, in 1992, the U.S. Congress passed legislation expanding access to the NSFNET, making it significantly easier for commercial networks to connect with those already used by the government and academic community. This caused the NSFNET to be replaced as the primary backbone of the internet; instead, commercial access points and exchanges became the key components of the now near-global internet infrastructure.

The World Wide Web and Browsers (Late 1980s-early 1990s)

The internet took a big step toward mainstream adoption in 1989, when Tim Berners-Lee of the European Organization for Nuclear Research (CERN) invented the World Wide Web, also known as “www” or “the web.” On the World Wide Web, documents are stored on web servers, identified by URLs, connected by hypertext links, and accessed via a web browser. Berners-Lee also invented the first web browser, called WorldWideWeb, and many others emerged shortly thereafter, the most famous being Mosaic, which launched in 1993 and whose team later created Netscape.
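The URL scheme Berners-Lee devised is simple enough to pull apart in a few lines of code. The sketch below, using only Python's standard library, splits the address of the first CERN web page into the parts described above.

```python
# A URL, the web's addressing scheme, broken into its parts.
from urllib.parse import urlparse

url = urlparse("https://info.cern.ch/hypertext/WWW/TheProject.html")

print(url.scheme)  # https  (the protocol used to fetch the document)
print(url.netloc)  # info.cern.ch  (the web server holding the document)
print(url.path)    # /hypertext/WWW/TheProject.html  (the document itself)
```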

The release of the Mosaic browser in 1993 caused a major spike in the number of internet users, largely because it allowed people to access the internet from ordinary home or office computers, which were also becoming mainstream around this time. In 1994, Mosaic co-author Marc Andreessen launched Netscape Navigator, which, along with Microsoft’s Internet Explorer, became one of the first truly mainstream web browsers.

The subsequent Browser Wars, which resulted in the failure of Netscape and the triumph of Microsoft, made Netscape one of the many early internet players to rise quickly and fall just as fast. Many use this story to demonstrate the ruthlessness of Bill Gates’ business practices, but no matter what you think of the guy, this “war” between Netscape and Microsoft helped shape the early days of the internet.

Apart from making it easier for anyone to access the internet from any machine, another reason browsers and the World Wide Web were so important to the growth of the internet was that they allowed for the transfer of not only text but also images. This increased the appeal of the internet to the average person, leading to its rapid growth.

Part 3: The Internet Takes Over

By the middle of the 1990s, the Internet Age had officially begun. Since then, the internet has grown both in the number of users and in the way it affects society. However, the internet as we know it today is still radically different from the internet that first went mainstream in the years leading up to the turn of the millennium.

Growth of the Internet and the Digital Divide

All restrictions to commercial use of the internet were lifted in 1995, and this led to a rapid growth in the number of users worldwide. More specifically, in 1995, there were some 16 million people connected to the internet. By 2000, there were around 300 million, and by 2005, there were more than a billion. Today, there are some 3.4 billion users across the world.

However, most of this growth has taken place in North America, Europe, and East Asia. The internet has yet to reach large portions of Latin America and the Caribbean, the Middle East and North Africa, as well as Sub-Saharan Africa, largely due to economic and infrastructure challenges. This has left many with the fear that the internet will exacerbate inequalities around the world as opportunities provided to some are denied to others based on access to the web.

But the other side of the coin is that these regions are poised to experience rapid growth. East Asia had relatively few internet users in 2000, but that region now represents the majority of internet users in the world, although much of this is due to the rapid industrialization of China and the growth of its middle class.

The Internet Gets Faster

In its early years, accessing the internet required connecting a computer to a phone line. This dial-up connection was slow, and it created other problems, the most famous being that it limited the number of people who could use a particular connection at once. (Who doesn’t remember getting kicked off the internet when their mom or dad signed on or picked up the phone?)

As a result, shortly after the internet went mainstream, the public began demanding faster connections capable of transmitting more data. The response was broadband internet, which made use of cable and Digital Subscriber Line (DSL) connections, and it rapidly became the norm. By 2004, half the world’s internet users had access to a high-speed connection. Today, the vast majority of internet users have broadband, although some 3 percent of Americans still use a dial-up connection.

Web 2.0

Another big driver of the growth of the web was the introduction of the concept known as “Web 2.0.” This describes a version of the web in which individuals play a more active role in the creation and distribution of web content, something we now refer to as social media.

However, there is some debate as to whether or not Web 2.0 is truly different from the original concept of the web. After all, social media grew up alongside the internet – the first social media site, Six Degrees, was launched in 1997. But no matter which side of the debate you fall on, there’s no doubt that the rise of social media sites such as MySpace and Facebook helped turn the internet into the cultural pillar that it has become.

The Mobile Internet

Perhaps the biggest reason the internet has become what it is today is the growth of mobile technology. Early cell phones allowed people to access the internet, but the experience was slow and stripped down. The Apple iPhone, released in 2007, gave people the first mobile browsing experience resembling what they got on a computer, and 3G wireless networks were fast enough to allow for email and web browsing.

Furthermore, WiFi technology, which was invented in 1997, steadily improved throughout the 2000s, making it easier for more and more devices to connect to the internet without needing to plug in a cable, helping make the internet even more mainstream.

WiFi can now be found almost anywhere, and 4G wireless networks connect people to the mobile internet with speeds that rival those of traditional internet connections, making it possible for people to access the internet whenever and wherever they want. Soon, we will be using 5G networks, which allow for even faster speeds and lower latency. But perhaps more importantly, 5G will make it possible for more devices to connect to the network, meaning more smart devices and a much broader understanding of the internet.

Part 4: The Future of the Internet

While the concept of the internet dates back to the 1950s, it didn’t become mainstream until the 1990s. But since then, it has become an integral part of our lives and has rewritten the course of human history. So, after all this rapid growth, what’s next?

Continued Growth

For many, the next chapter of the history of the internet will be defined by global growth. As economies around the world continue to expand, it’s expected that internet use will as well. This should cause the total number of internet users around the world to continue to grow, limited only by the development of infrastructure, as well as government policy.

Net Neutrality

One such government policy that could dramatically impact the role of the internet in our lives is net neutrality. Designed to keep the internet a fair place where information is freely exchanged, net neutrality prohibits ISPs from offering preferred access to sites that choose to pay for it. The argument against net neutrality is that some sites, such as YouTube and Netflix, use considerably more bandwidth than others, and ISPs believe they should have the right to charge for this increased use.

However, proponents of net neutrality argue this type of structure would allow large companies and organizations to pay their way to the top, reducing the equality of the internet. In the United States, net neutrality was established by the FCC in 2015, under the Obama administration, but in 2018, this policy was repealed. At the moment, nothing significant has changed, but only time will tell how this shift in policy will affect the internet.


Censorship

Another issue that could affect the internet moving forward is censorship. Internet use around the world is often restricted, most famously in China, as a means of limiting the information available to people. In other parts of the world, specifically the U.S. and Europe, such policies have not been enacted. However, in the era of fake news and social media, some companies, most notably Facebook, are taking action to limit what people can say on the internet. In general, this is an attempt to curb the spread of hate speech and other harmful communications, but it is a gray area that has defined free speech debates for most of history and will continue to be at the center of debates about the internet for years to come.


The internet has helped usher in a new age in human history, and we are just now beginning to understand how it will impact the way we live our lives. The fact that this tremendous cultural revolution has taken place in less than half a century speaks to the rapid nature of change in our modern world, and it serves as a reminder that change will continue to accelerate as we move into the future.


Edward Snowden’s NSA spying revelations highlighted just how much we have sacrificed to the gods of technology and convenience: something we used to take for granted, and once considered a basic human right, our privacy.

It is not just the NSA, either. Governments the world over are racing to introduce legislation that allows them to monitor and store every email, phone call, and instant message, every web page visited, and every VoIP conversation made by every single one of their citizens. The press has bandied about parallels with George Orwell’s dystopian world ruled by an all-seeing Big Brother. They are depressingly accurate.

Encryption provides a highly effective way to protect your internet behavior, communications, and data. The main problem with using encryption is that it flags you to organizations such as the NSA for closer scrutiny. The NSA’s data collection rules boil down to this: the NSA examines data from US citizens, then discards it if it is found to be uninteresting. Encrypted data, on the other hand, is stored indefinitely, until the NSA can decrypt it. The NSA can keep all data relating to non-US citizens indefinitely, but practicality suggests that encrypted data gets special attention.

If many more people start to use encryption, encrypted data will stand out less, and the job of surveillance organizations intent on invading everyone’s privacy will be much harder. Remember: anonymity is not a crime!

How Secure is Encryption?

Following revelations about the scale of the NSA’s deliberate assault on global encryption standards, confidence in encryption has taken a big dent. So let’s examine the current state of play…

Encryption Key Length

Key length is the crudest way of determining how long a cipher will take to break: it is the raw number of ones and zeros used in a cipher. Similarly, the crudest form of attack on a cipher is the brute force attack (or exhaustive key search), which involves trying every possible combination until the correct one is found. If anyone is capable of breaking modern encryption ciphers, it is the NSA, but doing so is a considerable challenge. For a brute force attack:
  • A 128-bit key cipher has 3.4 × 10^38 possible keys, and testing each key takes on the order of a thousand operations or more.
  • In 2011, the fastest supercomputer in the world (the Fujitsu K computer, located in Kobe, Japan) was capable of an Rmax peak speed of 10.51 petaflops. Based on this figure, it would take the Fujitsu K around 1.02 × 10^18 years (about a billion billion years) to crack a 128-bit AES key by brute force.
  • In 2016, the most powerful supercomputer in the world was the NUDT Tianhe-2 in Guangzhou, China. Almost 3 times as fast as the Fujitsu K, at 33.86 petaflops, it would “only” take around a third of that time to crack a 128-bit AES key. That is still an extremely long time, and it is the figure for breaking just one key.
  • A 256-bit key would require 2^128 times more computational power to break than a 128-bit one.
  • The number of years required to brute force a 256-bit cipher is 3.31 × 10^56 – about 2 followed by 46 zeros times the age of the universe (13.5 billion, or 1.35 × 10^10, years)!
The NUDT Tianhe-2 supercomputer in Guangzhou, China
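These figures are easy to sanity-check. The short calculation below reproduces them, assuming a hypothetical cost of about 1,000 operations per key test (the assumption implicit in the quoted estimates; a real key test is not a single floating-point operation).

```python
# Rough reproduction of the brute-force figures for a 128-bit key.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365   # 31,536,000
OPS_PER_KEY_TEST = 1000                 # assumed cost of testing one key

keys_128 = 2 ** 128                     # number of possible 128-bit keys
fujitsu_k = 10.51e15                    # Rmax, operations/second (2011)
tianhe_2 = 33.86e15                     # Rmax, operations/second (2016)

years_k = keys_128 * OPS_PER_KEY_TEST / fujitsu_k / SECONDS_PER_YEAR
years_t2 = keys_128 * OPS_PER_KEY_TEST / tianhe_2 / SECONDS_PER_YEAR

print(f"{keys_128:.2e}")   # 3.40e+38 possible keys
print(f"{years_k:.2e}")    # ~1.03e+18 years on the Fujitsu K
print(f"{years_t2:.2e}")   # ~3.19e+17 years on the Tianhe-2
```

Even with generous assumptions, the numbers land in the same ballpark as the article's: an exhaustive search of a 128-bit keyspace is far beyond classical computing.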

128-bit Encryption

Until the Edward Snowden revelations, people assumed that 128-bit encryption was in practice uncrackable through brute force, and that it would remain so for around another 100 years (taking Moore’s Law into account). In theory, this still holds true. However, the scale of resources that the NSA seems willing to throw at cracking encryption has shaken many experts’ faith in these predictions. Consequently, system administrators the world over are scrambling to upgrade cipher key lengths.

If and when quantum computing becomes available, all bets will be off. Quantum computers will be exponentially more powerful than any existing computer, and will make all current encryption ciphers and suites redundant overnight. In theory, the development of quantum encryption will counter this problem. However, access to quantum computers will initially be the preserve of the most powerful and wealthy governments and corporations, and it is not in the interests of such organizations to democratize encryption.

For the time being, however, strong encryption is your friend. Note that the US government uses 256-bit encryption to protect ‘sensitive’ data and 128-bit encryption for ‘routine’ encryption needs. However, the cipher it uses is AES. As I discuss below, this is not without problems.


Ciphers

Encryption key length refers to the amount of raw numbers involved; ciphers are the mathematics used to perform the encryption. It is weaknesses in these algorithms, rather than in the key length, that often lead to encryption breaking.

By far the most common ciphers you are likely to encounter are the two OpenVPN uses: Blowfish and AES. In addition, RSA is used to encrypt and decrypt a cipher’s keys, and SHA-1 or SHA-2 is used as a hash function to authenticate the data.

AES is generally considered the most secure cipher for VPN use (and in general). Its adoption by the US government has increased its perceived reliability, and consequently its popularity. However, there is reason to believe this trust may be misplaced.
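A quick way to get a feel for SHA-1 and SHA-2 is to compute a few digests. The snippet below, using only Python's standard library, shows the digest sizes and the avalanche effect (any change to the message produces a completely different digest) that makes hashes useful for authenticating data.

```python
# SHA-1 and SHA-256 (a member of the SHA-2 family) as hash functions.
import hashlib

msg = b"The quick brown fox jumps over the lazy dog"

sha1 = hashlib.sha1(msg).hexdigest()
sha256 = hashlib.sha256(msg).hexdigest()

print(len(sha1) * 4)    # 160  (SHA-1 digest size in bits)
print(len(sha256) * 4)  # 256  (SHA-256 digest size in bits)

# Changing a single character changes the digest entirely.
tampered = hashlib.sha256(
    b"The quick brown fox jumps over the lazy cog"
).hexdigest()
print(sha256 != tampered)  # True
```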


NIST

The United States National Institute of Standards and Technology (NIST) developed and/or certified AES, RSA, SHA-1, and SHA-2. NIST works closely with the NSA in the development of its ciphers. Given the NSA’s systematic efforts to weaken or build backdoors into international encryption standards, there is every reason to question the integrity of NIST algorithms.

NIST has been quick to deny any wrongdoing (“NIST would not deliberately weaken a cryptographic standard”), and has invited public participation in a number of upcoming encryption-related standards in a move designed to bolster public confidence. The New York Times, however, has accused the NSA of introducing undetectable backdoors, or subverting the public development process to weaken the algorithms, thus circumventing NIST-approved encryption standards.

News that a NIST-certified cryptographic standard, the Dual Elliptic Curve algorithm (Dual_EC_DRBG), had been deliberately weakened not just once but twice by the NSA destroyed pretty much any existing trust. That there might be a deliberate backdoor in Dual_EC_DRBG had already been noticed: in 2006, researchers at the Eindhoven University of Technology in the Netherlands noted that an attack against it was easy enough to launch on ‘an ordinary PC,’ and Microsoft engineers also flagged up a suspected backdoor in the algorithm.

Despite these concerns, where NIST leads, industry follows. Microsoft, Cisco, Symantec, and RSA all include the algorithm in their products’ cryptographic libraries, in large part because compliance with NIST standards is a prerequisite for obtaining US government contracts. NIST-certified cryptographic standards are pretty much ubiquitous worldwide, throughout all areas of industry and business that rely on privacy (including the VPN industry). This is all rather chilling. Perhaps because so much relies on these standards, cryptography experts have been unwilling to face up to the problem.

Perfect Forward Secrecy

One of the revelations in the information provided by Edward Snowden is that “another program, code-named Cheesy Name, was aimed at singling out SSL/TLS encryption keys, known as ‘certificates,’ that might be vulnerable to being cracked by GCHQ supercomputers.” That these certificates can be “singled out” strongly suggests that 1024-bit RSA encryption (commonly used to protect certificate keys) is weaker than previously thought, and that the NSA and GCHQ can decrypt it much more quickly than expected. In addition, the SHA-1 algorithm widely used to authenticate SSL/TLS connections is fundamentally broken. In both cases, the industry is scrambling to fix these weaknesses as fast as it can, by moving to RSA-2048+, Diffie-Hellman, or Elliptic Curve Diffie-Hellman (ECDH) key exchanges and SHA-2+ hash authentication.

What these issues (and the 2014 Heartbleed Bug fiasco) clearly highlight is the importance of using perfect forward secrecy (PFS) for all SSL/TLS connections. Under PFS, a new and unique private encryption key, with no additional keys derived from it, is generated for each session. For this reason, it is also known as an ephemeral key exchange. With PFS, the compromise of one SSL key does not matter very much, because new keys are generated for each connection and are often refreshed during connections. To meaningfully access communications, an attacker would need to compromise each of these new keys, a task so arduous as to be effectively impossible. Unfortunately, it is common practice (because it is easy) for companies to use just one private encryption key. If that key is compromised, the attacker can access all communications encrypted with it.
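The ephemeral-key idea is easier to see in code. The toy Diffie-Hellman exchange below uses a deliberately tiny 32-bit prime that is NOT secure; it is a sketch of the concept only, and real deployments use 2048-bit+ groups or elliptic curves.

```python
# Toy Diffie-Hellman exchange illustrating ephemeral session keys,
# the mechanism behind perfect forward secrecy. NOT secure as written:
# the prime is far too small for real use.
import secrets

p = 4294967291  # small public prime (largest prime below 2^32)
g = 5           # public generator

def session_key():
    # Each side generates a fresh private value for this session only...
    a = secrets.randbelow(p - 2) + 1
    b = secrets.randbelow(p - 2) + 1
    # ...exchanges only the public values g^a mod p and g^b mod p...
    A = pow(g, a, p)
    B = pow(g, b, p)
    # ...and independently derives the same shared secret.
    shared_a = pow(B, a, p)
    shared_b = pow(A, b, p)
    assert shared_a == shared_b
    return shared_a

# Every session yields an independent key, so compromising one
# session's key reveals nothing about past or future sessions.
k1, k2 = session_key(), session_key()
print(k1 != k2)  # True with overwhelming probability
```

Because the private values are discarded after each session, there is no long-term key whose theft would unlock recorded traffic, which is exactly what "forward secrecy" means.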

OpenVPN and PFS

The most widely used VPN protocol is OpenVPN, and it is considered very secure. One reason is that it allows the use of ephemeral keys; sadly, many VPN providers do not implement them. Without perfect forward secrecy, OpenVPN connections are not considered secure. It is also worth mentioning here that the HMAC SHA-1 hashes routinely used to authenticate OpenVPN connections are not a weakness, because HMAC SHA-1 is much less vulnerable to collision attacks than standard SHA-1 hashes. Mathematical proof of this is available in this paper.
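The difference between plain SHA-1 and HMAC SHA-1 is that the latter mixes a secret key into the digest, so finding a hash collision alone is not enough to forge a message. A minimal sketch with Python's standard library follows; the key and payload are made-up examples, not anything OpenVPN actually sends.

```python
# HMAC-SHA1 message authentication: the tag depends on a shared secret,
# so an attacker who alters the message cannot produce a valid tag.
import hashlib
import hmac

key = b"shared-secret"                 # hypothetical pre-shared HMAC key
packet = b"control-channel payload"    # hypothetical message

tag = hmac.new(key, packet, hashlib.sha1).hexdigest()

# The receiver recomputes the tag and compares in constant time.
expected = hmac.new(key, packet, hashlib.sha1).hexdigest()
print(hmac.compare_digest(tag, expected))  # True

# A tampered packet fails verification.
forged = hmac.new(key, b"tampered payload", hashlib.sha1).hexdigest()
print(hmac.compare_digest(tag, forged))    # False
```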

The Takeaway – So, is Encryption Secure?

To underestimate the NSA’s ambition or ability to compromise all encryption is a mistake. However, encryption remains the best defense we have against it (and others like it). To the best of anyone’s knowledge, strong ciphers such as AES (despite misgivings about its NIST certification) and OpenVPN (with perfect forward secrecy) remain secure. As Bruce Schneier, encryption specialist, fellow at Harvard’s Berkman Center for Internet and Society, and privacy advocate famously stated,
“Trust the math. Encryption is your friend. Use it well, and do your best to ensure that nothing can compromise it. That’s how you can remain secure even in the face of the NSA.”
Remember too that the NSA is not the only potential adversary. However, most criminals and even governments have nowhere near the NSA’s ability to circumvent encryption.

The Importance of End-to-end Encryption

End-to-end (e2e) encryption means that you encrypt data on your own device, and only you hold the encryption keys (unless you share them). Without these keys, an adversary will find it extremely difficult to decrypt your data.

Many services and products do not use e2e encryption. Instead, they encrypt your data and hold the keys for you. This can be very convenient, as it allows for easy recovery of lost passwords, syncing across devices, and so forth. It does mean, however, that these third parties could be compelled to hand over your encryption keys. A case in point is Microsoft: it encrypts all emails and files held in OneDrive (formerly SkyDrive), but it also holds the encryption keys. In 2013, it used these to unlock the emails and files of its 250 million worldwide users for inspection by the NSA. Strongly avoid services that encrypt your data on their servers rather than letting you encrypt your own data on your own machine.
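The principle is simply that the key stays on your device and only ciphertext ever reaches the server. The sketch below illustrates this with a toy stream cipher built from SHA-256 in counter mode; it is a teaching aid only, and real applications should use a vetted authenticated cipher such as AES-GCM from a maintained library.

```python
# Sketch of end-to-end encryption: data is encrypted locally, and only
# ciphertext leaves the device. TOY cipher for illustration - do not
# use this construction in production.
import hashlib
import secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data against a SHA-256 counter-mode keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        block = hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")
        ).digest()
        stream.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key = secrets.token_bytes(32)    # stays on your device, never uploaded
nonce = secrets.token_bytes(16)  # unique per message
plaintext = b"my private notes"

ciphertext = keystream_xor(key, nonce, plaintext)  # what the server stores
recovered = keystream_xor(key, nonce, ciphertext)  # only the key holder can do this

print(ciphertext != plaintext)  # True
print(recovered == plaintext)   # True
```

A service holding only `ciphertext` cannot be compelled to reveal anything useful; a service holding `key` as well can.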


HTTPS

Although strong encryption has recently become trendy, websites have been using strong end-to-end encryption for the last 20 years. After all, if websites were not secure, then online shopping or banking wouldn’t be possible. The encryption protocol used for this is HTTPS, which stands for HTTP Secure (or HTTP over SSL/TLS). It is used for websites that need to secure users’ communications, and it is the backbone of internet security.

When you visit a non-secure HTTP website, data is transferred unencrypted. This means anyone watching can see everything you do while visiting that site, including your transaction details when making payments. It is even possible to alter the data transferred between you and the web server.

With HTTPS, a cryptographic key exchange occurs when you first connect to the website. All subsequent actions on the website are encrypted, and thus hidden from prying eyes. Anyone watching can see that you have visited a certain website, but cannot see which individual pages you read or any data transferred. For example, this website is secured using HTTPS, so unless you are using a VPN while reading this web page, your ISP can see that you have visited it, but cannot see that you are reading this particular article.

It is easy to tell whether a website is secured by HTTPS: just look for a locked padlock icon to the left of the main URL/search bar. There are issues relating to HTTPS, but in general it is secure. If it wasn’t, none of the billions of financial transactions and transfers of personal data that happen every day on the internet would be possible; the internet itself (and possibly the world economy!) would collapse overnight.


Metadata

An important limitation of encryption is that it does not necessarily protect users from the collection of metadata. Even if the contents of emails, voice conversations, or web browsing sessions cannot be readily listened in on, knowing when, where, from whom, to whom, and how regularly such communication takes place can tell an adversary a great deal, and that is a powerful tool in the wrong hands. For example, even if you use a securely encrypted messaging service such as WhatsApp, Facebook will still be able to tell who you are messaging, how often you message, and how long you usually chat for. With such information, it would be easy to discover that you were having an affair, for example. Although the NSA does target individual communications, its primary concern is the collection of metadata. As NSA General Counsel Stewart Baker has openly acknowledged,
“Metadata absolutely tells you everything about somebody’s life. If you have enough metadata, you don’t really need content.”
Technologies such as VPNs and Tor can make the collection of metadata very difficult. For example, an ISP cannot collect metadata relating to the browsing history of customers who use a VPN to hide their online activities. Note, though, that many VPN providers themselves log some metadata. This should be a consideration when choosing a service to protect your privacy. Please also note that some mobile apps may bypass a VPN that is running on your device and connect directly to their publishers’ servers. Using a VPN, for example, will not prevent WhatsApp from sending metadata to Facebook.
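The distinction between content and metadata can be made concrete with a toy Python sketch. The XOR "cipher" and the observer log below are illustrative assumptions, not any real protocol; the point is only that an observer who never sees the plaintext still learns who talks to whom, how often, and how much.

```python
# Toy illustration: encrypted content, exposed metadata.
# Assumption: XOR stands in for real encryption purely for readability.
import time

def toy_encrypt(data: bytes, key: int = 0x5A) -> bytes:
    """Stand-in for real encryption (XOR is NOT secure; illustration only)."""
    return bytes(b ^ key for b in data)

observer_log = []  # what a network observer can record

def send(src: str, dst: str, plaintext: bytes) -> bytes:
    ciphertext = toy_encrypt(plaintext)
    # The observer never sees the plaintext, but can still log
    # the time, the endpoints, and the size of every message.
    observer_log.append({
        "time": time.time(),
        "from": src,
        "to": dst,
        "size": len(ciphertext),
    })
    return ciphertext

for _ in range(25):
    send("alice", "bob", b"meet me at the usual place")

# Content is hidden, yet the pattern of communication is laid bare:
# 25 messages from alice to bob, all the same size, in a tight burst.
print(len(observer_log), observer_log[0]["from"], observer_log[0]["to"])
```

This is exactly the kind of pattern that makes metadata so valuable to an adversary, and why tools that obscure traffic patterns matter as much as encryption itself.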

Identify Your Threat Model

When considering how to protect your privacy and stay secure on the internet, carefully consider who or what worries you most. Defending yourself against everything is almost impossible, and any attempt to do so will likely seriously degrade the usability (and your enjoyment) of the internet. Recognizing that being caught downloading an illicit copy of Game of Thrones is a bigger worry for you than being targeted by a crack NSA Tailored Access Operations (TAO) team for personalized surveillance is a good start. It will leave you less stressed, with a more usable internet, and with more effective defenses against the threats that really matter to you. Of course, if your name is Edward Snowden, then TAO teams will be part of your threat model… I will discuss steps you should take to help identify your threat model in an upcoming article. In the meantime, this article does a good job of introducing the basics.

Use FOSS Software

The terrifying scale of the NSA’s attack on public cryptography, and its deliberate weakening of common international encryption standards, has demonstrated that no proprietary software can be trusted, not even software specifically designed with security in mind. The NSA has co-opted or coerced hundreds of technology companies into building backdoors into their programs, or otherwise weakening security in order to allow it access. US and UK companies are particularly suspect, although the reports make it clear that companies across the world have acceded to NSA demands.

The problem with proprietary software is that the NSA can fairly easily approach and convince its sole developers and owners to play ball. In addition, its source code is kept secret, which makes it easy to add to or modify the code in dodgy ways without anyone noticing.

The best answer to this problem is to use free open source software (FOSS). Often jointly developed by disparate and otherwise unconnected individuals, the source code is available to everyone to examine and peer-review. This minimizes the chances that someone has tampered with it. Ideally, this code should also be compatible with other implementations, in order to minimize the possibility of a backdoor being built in.

It is, of course, possible that NSA agents have infiltrated open source development groups and introduced malicious code without anyone’s knowledge. In addition, the sheer amount of code that many projects involve means that it is often impossible to fully peer-review all of it. Despite these potential pitfalls, FOSS remains the most reliable and least likely to be tampered with software available. If you truly care about privacy, you should try to use it exclusively (up to and including using FOSS operating systems such as Linux).

Steps You Can Take to Improve Your Privacy

With the proviso that nothing is perfect, and that if “they” really want to get you, “they” probably can, there are steps you can take to improve your privacy.

Pay for Stuff Anonymously

One step to improving your privacy is to pay for things anonymously. When it comes to physical goods delivered to an actual address, this isn’t going to happen. Online services are a different kettle of fish, however. It is increasingly common to find services that accept payment through Bitcoin and the like. A few, such as VPN service Mullvad, will even accept cash sent anonymously by post.


Bitcoin is a decentralized and open source virtual currency that operates using peer-to-peer technology (much as BitTorrent and Skype do). The concept is particularly revolutionary and exciting because it does not require a middleman to work (for example, a state-controlled bank). Whether or not Bitcoins represent a good investment opportunity remains hotly debated, and is not within the remit of this guide. It is also completely outside of my area of expertise! You can read the full article on
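As rough intuition for how Bitcoin manages without a middleman, here is a toy Python sketch of proof-of-work mining, the mechanism by which untrusting peers agree on the transaction ledger. This is a drastic simplification and an assumption-laden sketch: real Bitcoin hashes structured block headers with double SHA-256 against a dynamically adjusted difficulty target, not a plain string as here.

```python
# Toy proof-of-work sketch (grossly simplified; illustration only).
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce  # proof found: expensive to find, trivial to verify
        nonce += 1

# Miners race to find a valid nonce; whoever wins extends the ledger.
nonce = mine("alice pays bob 1 BTC")
print(nonce)
```

The asymmetry is the point: finding a valid nonce takes many hash attempts, but any peer can verify the result with a single hash, so the network can agree on history without trusting any central party.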