Nowadays, the Internet is abuzz about cryptocurrencies and blockchain technology. For me, it is still far behind the dot-com bubble phenomenon of the late 1990s and early 2000s, but it is a tech bubble nonetheless, a phantom in the business exchange world, if I may say. The attention and speculation that cryptocurrencies attract only make the cryptocurrency market more interesting.
A cryptocurrency system produces currency in a decentralized way, at a rate that was defined when the system was created and is publicly known. In centralized banking systems, governments or corporate boards control the supply of currency by printing units of fiat money or demanding additions to digital banking ledgers. In a decentralized cryptocurrency, governments and organizations cannot create new units, nor do they back other businesses, banks, or corporate entities holding asset value measured in it. Want to know more about blockchain and what it is all about? Scroll down for an in-depth explanation of cryptocurrencies.
Best Cryptocurrencies to Invest In Right Now (a safe portfolio)
Below is my selection of the best crypto coins to invest in right now to create a safe cryptocurrency portfolio of Bitcoin and its altcoins. This might not be the riskiest set of choices with the highest returns, but if you hold a little of each of these coins, you stand a good chance of being a wealthy man or woman a few years from now. This is a long-term crypto investment portfolio.
1. Bitcoin (BTC)
Bitcoin, the most popular cryptocurrency, currently has a market capitalization of more than $76.828 billion, with each bitcoin valued at over $4,469 and still rising as of this writing. The price of this cryptocurrency has multiplied nearly 8x in the last year as of this writing.
2. Ethereum (ETH)
Ethereum is arguably the second-most popular cryptocurrency, with a market capitalization of around $36 billion as of this writing. The price of this cryptocurrency has exploded by more than 3,000% in the last year. In spite of that growth, it still remains at less than one-tenth of Bitcoin's price.
3. Ripple (XRP)
Ripple is a payment protocol that allows instant transaction settlement and reduces transaction fees to cents. Using the power of blockchain, Ripple delivers one frictionless experience for sending money globally. It is among the quickest and most scalable digital assets, empowering real-time payments worldwide. Banks and payment providers can use Ripple's XRP asset to further cut costs and access new markets. As of this writing, it has a market capitalization of around $7.821 billion.
4. Litecoin (LTC)
Litecoin is a peer-to-peer Internet digital currency that complements Bitcoin. Litecoin's price rose more than 2,000% in the last year. LTC enables instant, near-zero-cost payments to anybody in the world. Litecoin offers quicker transaction confirmation times and improved storage efficiency compared to the leading math-based digital currency. As of this writing, it has a market capitalization of around $3.449 billion.
5. Stellar (XLM)
The Stellar network is an open-source, distributed, community-owned network used to facilitate cross-asset transfers of value. It enables cross-asset transfers at a fraction of a penny while aiming to be an open financial system that gives people of all income levels access to low-cost financial services. Unlike Ripple, however, Stellar.org is a non-profit, and its platform is open source and decentralized.
6. Binance Coin (BNB)
Binance Coin (BNB) is the cryptocurrency of the Binance platform. The name “Binance” is a combination of binary and finance. As of 2019, many businesses accept BNB as a form of payment. It has grown significantly over the past two years and is definitely a must-have in a well-diversified portfolio.
7. IOTA (MIOTA)
IOTA is an open-source distributed ledger protocol that goes beyond blockchain through its core invention, the so-called “block-less Tangle”. As of this writing, it has a market capitalization of around $2.465 billion. With no fees, IOTA enables companies to explore new business-to-business models by trading on an open market in real time. Consensus is no longer decoupled from transaction-making but is instead an intrinsic part of the system, leading to a decentralized and self-regulating peer-to-peer network.
8. Monero (XMR)
Monero is a secure, private, and untraceable cryptocurrency. As of this writing, it has a market capitalization of about $2.095 billion. Monero is more than just technology; it is also what the technology stands for. It gives the full block reward to the miners, the most critical members of the network, who provide this security. Transactions are cryptographically secured using the latest and most resilient encryption tools available.
9. Neo (NEO)
Formerly called Antshares, NEO is China's first open-source blockchain, often described as the Ethereum of China. It has a market capitalization of about $1.787 billion as of this writing, placing it among the top 10 cryptocurrencies by capitalization.
10. OmiseGO (OMG)
OmiseGO is a fiat- and crypto-friendly, cross-chain compatible, Ethereum-powered platform with a market capitalization of around $1.075 billion as of this writing. OmiseGO has the potential to become a global standard for exchange and payments, answering a fundamental coordination problem among payment processors, gateways, and financial institutions.
Introduction to Cryptocurrencies
Transactions with cryptocurrencies are generally irreversible once a number of blocks have confirmed them. Furthermore, one feature cryptocurrency lacks compared to a credit card is consumer protection against fraud, such as chargebacks. However, this is a minor issue, since a third-party, multi-signature escrow can be used to mediate a transaction, which is effectively equivalent to enabling chargebacks. This attribute can even be an advantage: opting into reversibility through escrow when you want it is much easier than performing an irreversible transaction on a system with native chargebacks.
What are cryptocurrencies?
Have you, in one way or another, encountered words such as cryptocurrency trading, cryptocurrency mining, or bitcoin value? The chances are 95% YES! But what really is a cryptocurrency?
A cryptocurrency is a digital asset designed to function as a medium of exchange, using a technique called cryptography. Cryptography is primarily used to secure transactions and to control the creation of additional currency units. Cryptocurrencies are classified as a subcategory of digital currencies, and similarly as a subset of alternative and virtual currencies.
The best example is Bitcoin. Bitcoin became the world's first decentralized cryptocurrency and digital payment system in 2009. The system works without a central repository or single administrator, hence “decentralized”. It is peer-to-peer: transactions take place directly between users, without an intermediary. Since then, many other cryptocurrencies have mushroomed. These are often called altcoins, a blend of “Bitcoin alternative”.
As of the end of 2019, thousands of cryptocurrencies exist. Most are analogous to, and derived from, the original fully decentralized cryptocurrency, Bitcoin. In a cryptocurrency, the safety, integrity, and balance of the ledgers are maintained by a community of mutually distrustful parties called “crypto miners”: members of the general public who use their computers to validate and timestamp transactions, which are then added to the ledger in accordance with a particular timestamping scheme.
How do Cryptocurrencies work?
To fully understand how cryptocurrency works, you will need to understand a few basic concepts:
Publicly distributed ledger: A public ledger stores all transactions made since a cryptocurrency's creation. The identities of the coins' owners are encrypted, and the system uses other cryptographic techniques to ensure the legitimacy of record-keeping. The ledger ensures that the corresponding “digital wallets” can compute an accurate spendable balance. The Ethereum wallet and the Bitcoin (blockchain) wallet are among the world's most popular digital wallets.
Peer-to-peer transactions: A transfer of funds between digital wallets is called a “transaction”. A transaction is submitted to the public distributed ledger and awaits confirmation. When a transaction is made, the wallet uses a cryptographic signature as proof that the transaction comes from the wallet's owner. The confirmation process takes a little time, usually about ten minutes for Bitcoin.
Cryptocurrency mining: In simple terms, this is the process of confirming transactions and adding them to the public ledger. To add a transaction to the ledger, “miners” must solve a progressively complicated computational problem, somewhat like a mathematical puzzle. The miner who solves the puzzle first adds a “block” of transactions to the public ledger. This is how transactions, blocks, and the public distributed ledger work together, ensuring that no single person can simply add or modify a block at will.
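The puzzle the miners race to solve can be sketched in a few lines of Python. This is a toy proof-of-work over a made-up transaction string (the `mine` function and its data are hypothetical); real Bitcoin mining hashes a structured block header against an adjustable difficulty target:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Try nonces until the block's hash starts with `difficulty` hex zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine("alice->bob:1.5BTC")          # hypothetical transaction data
digest = hashlib.sha256(f"alice->bob:1.5BTC{nonce}".encode()).hexdigest()
print(nonce, digest)                       # the hash begins with "0000"
```

Requiring one more leading zero multiplies the expected work by 16, which is why the problem can be made "progressively complicated" as more mining power joins the network.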
The Technology Behind Cryptocurrencies: Blockchain
Bitcoin and its spin-offs use decentralized control, as opposed to centralized electronic money and banking systems. Here, all transactions are confirmed and substantiated by network nodes and recorded in a public distributed ledger technology called the blockchain.
A group or individual identified as Satoshi Nakamoto developed the technical system on which decentralized cryptocurrencies are based; the first working version was open-source software. Like the internet in its infancy, blockchain technology is difficult to comprehend and predict, largely due to a lack of knowledge about how these cryptocurrencies work. Yet it may turn out to be omnipresent in the exchange of digital and physical goods, information, and online platforms. The term blockchain embodies a whole new suite of technologies, and it can therefore be implemented in many different ways, depending entirely on the objective.
A blockchain is a continuously growing list of records, simply called blocks, which are linked and secured using cryptography. A block typically consists of a hash pointer linking to the previous block, a timestamp, and transaction data. By design, blockchains are inherently resistant to modification of their data: data in any one block cannot be altered retroactively without changing every subsequent block. A blockchain can record transactions between two parties efficiently and permanently, and it is collectively managed by a peer-to-peer network that strictly adheres to a protocol for validating new blocks.
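The hash-pointer idea can be illustrated with a minimal sketch in Python. The block fields and helper names here are invented for illustration; real blockchains also carry timestamps, nonces, and Merkle roots:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of the block's full contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Each new block stores the hash of the block before it
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def is_valid(chain: list) -> bool:
    # Every block must point at the hash of its predecessor
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
add_block(chain, "genesis")
add_block(chain, "alice pays bob 2 LTC")
add_block(chain, "bob pays carol 1 LTC")
print(is_valid(chain))                        # True

chain[1]["data"] = "alice pays mallory 2 LTC"  # retroactive edit
print(is_valid(chain))                         # False: the next pointer no longer matches
```

Because each block commits to the hash of its predecessor, editing an old block invalidates every block after it, which is what makes retroactive modification detectable.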
Blockchain Technology Applications and Beyond
By design, blockchains are examples of a secure computing system with high Byzantine fault tolerance. This makes them potentially well suited for recording events, medical records, and other records-management activities, such as identity management, documenting provenance, transaction processing, and food traceability.
The objective is to apply the technology to other transactions, such as keeping track of property ownership. This could revolutionize how businesses and governments operate and how citizens carry out their lives. At present, Bitcoin, the most popular cryptocurrency system, carries out around two hundred thousand transactions per day. For wider adoption, however, the system needs to be able to cope with many millions.
Synopsis in Cryptocurrency Trading
Below is a quick dip in the water on trading cryptocurrencies and the basics you should understand.
Why trade Cryptocurrencies today?
Cryptocurrencies are a level playing field where retail traders such as us can profit.
Cryptocurrencies are a new and still-growing market.
This is a 24/7 market where, with the right strategies, you can profit heavily from its regular volatility.
Cryptocurrency trading is the Foreign Exchange (Forex) of cryptocurrencies: traders can trade various bitcoins and altcoins for USD and BTC. Trading cryptocurrencies does not require mining hardware, nor investment in HYIPs (High-Yield Investment Programs) or cloud bitcoin mining, both of which always carry risks to their integrity.
Why trade bitcoin or altcoins and not Forex? The reason is that it is easy to get into: in less than an hour, you can start trading bitcoin and earning money. And do not forget that cryptocurrency trading, once started, is hard to leave.
A huge advantage over Forex is the smaller spread: the difference between the asking price and the bid price of the market. For example, take the EUR/USD pair with an asking price of 1.0933 and a bid price of 1.0931. The spread is only 0.0002, or 0.018% of the asking price (0.0002 / 1.0933 = 0.018%). Now take the spread in Bitcoin to USD: an asking price of US $429 and a bid price of US $428.999. The spread amounts to just US $0.001, or 0.0002% (0.001 / 429 = 0.0002%). With this smaller spread, your exchange leaves you at an almost zero loss, whereas in Forex you are already down 0.018% after your exchange, which is significant.
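The spread arithmetic from the example above can be checked directly (the prices are the ones quoted in the text):

```python
def spread_pct(ask: float, bid: float) -> float:
    """Spread as a percentage of the asking price."""
    return (ask - bid) / ask * 100

eurusd = spread_pct(1.0933, 1.0931)      # Forex example from the text
btcusd = spread_pct(429.0, 428.999)      # Bitcoin example from the text
print(f"EUR/USD spread: {eurusd:.3f}%")  # ~0.018%
print(f"BTC/USD spread: {btcusd:.4f}%")  # ~0.0002%
```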
Margin Trading vs. Leverage Trading
Margin trading and leverage trading are possible on some Forex and cryptocurrency exchanges. In margin trading, peer-to-peer margin fund providers allow you to borrow capital from them. This means you can borrow buying/selling power, but you must allocate funds (the margin), which remain inaccessible until you return the borrowed capital. For example, suppose you want to buy 2 BTC but only have US $429. This can be done, on the condition that you pay some interest after you close your position.
Let's say BTC closes at US $450. That results in $21 x 2 = $42 in winnings. From this, you only need to subtract a low interest charge of about 2%, and you have your final winnings. That is, of course, if you predicted the trade correctly; with a losing position, you could lose correspondingly more. Alternatively, you may opt for leverage trading on some Forex and cryptocurrency exchanges. In leverage trading, you can trade an amount you do not currently have at your disposal. Typically, a cryptocurrency exchange offers a leverage ratio of 1:10, meaning that for each dollar in your possession you get $10 of buying power. In sum, this means higher risk and possibly higher profit.
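A rough sketch of that arithmetic, assuming (as the example seems to imply) that the ~2% interest is charged on the borrowed notional; actual exchange fee schedules differ:

```python
def margin_pnl(entry: float, close: float, qty: float, interest_rate: float) -> float:
    """Profit on a margined long: gross gain minus interest on the borrowed amount."""
    gross = (close - entry) * qty
    interest = entry * qty * interest_rate
    return gross - interest

# The example above: buy 2 BTC at $429, close at $450, ~2% interest
print(round(margin_pnl(429, 450, 2, 0.02), 2))  # $42 gross - $17.16 interest = $24.84

# Leverage at 1:10: $429 of your own capital controls $4,290 of buying power
capital, leverage = 429, 10
print(capital * leverage)
```

The same formula with a losing close (say $410) goes negative, which is the "you could lose more" half of the trade.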
Investing in Cryptocurrency
Cryptocurrency investing is different from investing in regular stock. Here is why: when investing in a company, you buy shares of that company and essentially own a percentage of it, depending on how big your investment is. When you invest in a cryptocurrency such as Bitcoin or Ethereum, what you receive instead are digital tokens that serve different purposes. Bitcoin gets you a partially anonymous, decentralized cryptocurrency. With Ethereum, you get a piece of the power that runs smart contracts and decentralized apps.
However, there are many other cryptocurrencies (1,300+ at the last count) and blockchain companies on the stock exchange into which prospective investors can channel their money. Many of the bigger cryptocurrency exchanges, such as GDAX, Bitfinex, Gemini, and Kraken, usually offer solid volume for trading cryptocurrencies via bank transfers or credit cards. Poloniex is another exchange, offering more than 80 cryptocurrencies for trading; the catch is that you can only use Bitcoin or other cryptocurrencies to fund these trades. Coinbase is likewise a good choice, growing in popularity thanks to its ease of use and built-in wallet, though the tradeoff is relatively higher fees.
We have listed the 10+ best types of cryptocurrency to invest in now [Q4 2019] based on market capitalization. We hope this list will further fuel the enthusiasm and understanding of people interested in cryptocurrency.
Last piece of advice
Many cryptocurrencies are mere duplicates of existing cryptocurrencies already circulating in the market, with only very minor changes. There are also many ways to lose cryptocurrency from local storage forever, notably through malware or data loss. Losing the physical media can effectively remove the lost cryptocurrency from the markets forever.
Banning these cryptocurrencies is not the answer or a quick fix to ending money laundering and illicit trades, just as banning cash is not an answer to those same problems. Unanticipated events could undermine the value of bitcoin, just as a superior cryptocurrency could out-compete and replace it. The possibilities for failure are endless, but one reason for failure should not be that policymakers did not understand its inner workings and possibilities. This article was originally posted on Technolocheese.
Edward Snowden's NSA spying revelations highlighted just how much we have sacrificed to the gods of technology and convenience: something we used to take for granted, and once considered a basic human right, our privacy.
It is not just the NSA. Governments the world over are racing to introduce legislation that allows them to monitor and store every email, phone call and instant message, every web page visited, and every VoIP conversation made by every single one of their citizens.
The press has bandied about parallels with George Orwell’s dystopian world, ruled by an all-seeing Big Brother. They are depressingly accurate.
Encryption provides a highly effective way to protect your internet behavior, communications, and data. The main problem with using encryption is that its use flags you up to organizations such as the NSA for closer scrutiny.
Details of the NSA's data collection rules were revealed by the Guardian in 2013. What it boils down to is that the NSA examines data from US citizens, then discards it if it is found to be uninteresting. Encrypted data, on the other hand, is stored indefinitely, until the NSA can decrypt it.
The NSA can keep all data relating to non-US citizens indefinitely, but practicality suggests that encrypted data gets special attention.
If a lot more people start to use encryption, then encrypted data will stand out less, and surveillance organizations’ job of invading everyone’s privacy will be much harder.
How Secure is Encryption?
Following revelations about the scale of the NSA’s deliberate assault on global encryption standards, confidence in encryption has taken a big dent. So let’s examine the current state of play…
Encryption Key Length
Key length is the crudest way of estimating how long a cipher will take to break: it is the raw number of ones and zeros used in a key. Likewise, the crudest form of attack on a cipher is the brute force attack (or exhaustive key search), which involves trying every possible key until the correct one is found.
If anyone is capable of breaking modern encryption ciphers it is the NSA, but to do so is a considerable challenge. For a brute force attack:
A 128-bit key cipher has 3.4 x 10^38 possible keys. Going through each of them would require an astronomical number of operations.
In 2016 the most powerful supercomputer in the world was the NUDT Tianhe-2 in Guangzhou, China. Almost 3 times as fast as the Fujitsu K, at 33.86 petaflops, it would “only” take it around a third of a billion years to crack a 128-bit AES key.
That’s still a long time and is the figure for breaking just one key!
A 256-bit key would require 2^128 times more computational power to break than a 128-bit one.
The number of years required to brute force a 256-bit cipher is 3.31 x 10^56: roughly 2.45 x 10^46 times the age of the Universe (13.5 billion, or 1.35 x 10^10, years)!
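These figures are easy to verify (the 3.31 x 10^56-year figure is the brute-force estimate quoted above, not something the snippet derives):

```python
AGE_OF_UNIVERSE_YEARS = 1.35e10          # ~13.5 billion years, as in the text

keys_128 = 2 ** 128                      # possible 128-bit keys
keys_256 = 2 ** 256                      # possible 256-bit keys

print(f"{keys_128:.2e}")                 # 3.40e+38, matching the figure above
print(keys_256 // keys_128 == 2 ** 128)  # True: 2^128 times more work

brute_force_256_years = 3.31e56          # estimate quoted in the text
print(f"{brute_force_256_years / AGE_OF_UNIVERSE_YEARS:.2e}")  # ~2.45e+46
```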
Until the Edward Snowden revelations, people assumed that 128-bit encryption was in practice uncrackable through brute force. They believed it would be so for around another 100 years (taking Moore’s Law into account).
In theory, this still holds true. However, the scale of resources that the NSA seems willing to throw at cracking encryption has shaken many experts’ faith in these predictions. Consequently, system administrators the world over are scrambling to upgrade cipher key lengths.
If and when quantum computing becomes available, all bets will be off. Quantum computers will be exponentially more powerful than any existing computer and will make all current encryption ciphers and suites redundant overnight.
In theory, the development of quantum encryption will counter this problem. However, access to quantum computers will initially be the preserve of the most powerful and wealthy governments and corporations only. It is not in the interests of such organizations to democratize encryption.
For the time being, however, strong encryption is your friend.
Note that the US government uses 256-bit encryption to protect ‘sensitive’ data and 128-bit for ‘routine’ encryption needs. However, the cipher it uses is AES. As I discuss below, this is not without problems.
Encryption key length refers to the amount of raw numbers involved; ciphers are the mathematics used to perform the encryption. It is weaknesses in these algorithms, rather than in the key length, that often lead to encryption being broken.
By far the most common ciphers that you will likely encounter are those OpenVPN uses: Blowfish and AES. In addition to this, RSA is used to encrypt and decrypt a cipher’s keys. SHA-1 or SHA-2 are used as hash functions to authenticate the data.
The most secure VPNs use an AES cipher. Its adoption by the US government has increased its perceived reliability, and consequently its popularity. However, there is reason to believe this trust may be misplaced.
The United States National Institute of Standards and Technology (NIST) developed and/or certified AES, RSA, SHA-1, and SHA-2. NIST works closely with the NSA in the development of its ciphers.
Given the NSA’s systematic efforts to weaken or build backdoors into international encryption standards, there is every reason to question the integrity of NIST algorithms.
NIST has been quick to deny any wrongdoing (“NIST would not deliberately weaken a cryptographic standard”). It has also invited public participation in a number of upcoming proposed encryption-related standards, in a move designed to bolster public confidence.
The New York Times, however, has accused the NSA of introducing undetectable backdoors, or subverting the public development process to weaken the algorithms, thus circumventing NIST-approved encryption standards.
News that a NIST-certified cryptographic standard, the Dual Elliptic Curve algorithm (Dual_EC_DRBG), had been deliberately weakened not just once but twice by the NSA destroyed pretty much any existing trust.
That there might be a deliberate backdoor in Dual_EC_DRBG had already been noticed before. In 2006, researchers at the Eindhoven University of Technology in the Netherlands noted that an attack against it was easy enough to launch on ‘an ordinary PC.’ Microsoft engineers also flagged up a suspected backdoor in the algorithm.
Despite these concerns, Microsoft, Cisco, Symantec, and RSA all include the algorithm in their products’ cryptographic libraries. This is in large part because compliance with NIST standards is a prerequisite to obtaining US government contracts.
NIST-certified cryptographic standards are pretty much ubiquitous worldwide throughout all areas of industry and business that rely on privacy (including the VPN industry). This is all rather chilling.
Perhaps because so much relies on these standards, cryptography experts have been unwilling to face up to the problem.
Perfect Forward Secrecy
One of the revelations in the information provided by Edward Snowden is that “another program, code-named Cheesy Name, was aimed at singling out SSL/TLS encryption keys, known as ‘certificates,’ that might be vulnerable to being cracked by GCHQ supercomputers.”
That these certificates can be “singled out” strongly suggests that 1024-bit RSA encryption (commonly used to protect the certificate keys) is weaker than previously thought. The NSA and GCHQ could, therefore, decrypt it much more quickly than expected.
In addition to this, the SHA-1 algorithm widely used to authenticate SSL/TLS connections is fundamentally broken. In both cases, the industry is scrambling to fix the weaknesses as fast as it can, by moving on to RSA-2048+, Diffie-Hellman, or Elliptic Curve Diffie-Hellman (ECDH) key exchanges and SHA-2+ hash authentication.
What these issues (and the 2014 Heartbleed Bug fiasco) clearly highlight is the importance of using perfect forward secrecy (PFS) for all SSL/TLS connections.
This is a system whereby a new and unique (with no additional keys derived from it) private encryption key is generated for each session. For this reason, it is also known as an ephemeral key exchange.
Using PFS, if one SSL key is compromised, this does not matter very much because new keys are generated for each connection. They are also often refreshed during connections. To meaningfully access communications these new keys would also need to be compromised. This makes the task so arduous as to be effectively impossible.
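The shape of an ephemeral key exchange can be sketched with toy finite-field Diffie-Hellman. The 32-bit prime here is hopelessly insecure and exists only to show how each session derives a fresh shared key; real TLS uses 2048-bit+ groups or elliptic curves:

```python
import secrets

# Toy Diffie-Hellman parameters. The prime is far too small to be
# secure; it only demonstrates the ephemeral-key idea.
P = 0xFFFFFFFB   # the largest prime below 2^32
G = 5

def ephemeral_keypair() -> tuple:
    priv = secrets.randbelow(P - 2) + 1   # fresh secret for this session only
    return priv, pow(G, priv, P)          # (private key, public value)

def session_key(my_priv: int, their_pub: int) -> int:
    # Both sides compute the same value without ever sending it
    return pow(their_pub, my_priv, P)

# A new key pair per connection: compromising one session's key
# tells an attacker nothing about earlier or later sessions.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()
print(session_key(a_priv, b_pub) == session_key(b_priv, a_pub))  # True
```

Because the private values are generated afresh and then discarded, there is no long-lived key whose compromise would unlock past traffic, which is exactly the property PFS provides.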
Unfortunately, it is common practice (because it’s easy) for companies to use just one private encryption key. If this key is compromised, then the attacker can access all communications encrypted with it.
OpenVPN and PFS
The most widely used VPN protocol is OpenVPN. It is considered very secure. One of the reasons for this is because it allows the use of ephemeral keys.
Sadly this is not implemented by many VPN providers. Without perfect forward secrecy, OpenVPN connections are not considered secure.
It is also worth mentioning here that the HMAC SHA-1 hashes routinely used to authenticate OpenVPN connections are not a weakness. This is because HMAC SHA-1 is much less vulnerable to collision attacks than standard SHA-1 hashes.
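Python's standard library shows the distinction in a few lines: an HMAC tag depends on a secret key, so producing a colliding message without that key is far harder than colliding bare SHA-1 (the key and message below are, of course, made up):

```python
import hashlib
import hmac

key = b"shared-secret"                     # hypothetical pre-shared key
message = b"example OpenVPN control data"  # made-up payload

# The tag depends on both the key and the message, so a plain
# SHA-1 collision alone is not enough to forge a valid tag.
tag = hmac.new(key, message, hashlib.sha1).hexdigest()
print(tag)

# Verify with a constant-time comparison to avoid timing leaks
check = hmac.new(key, message, hashlib.sha1).hexdigest()
print(hmac.compare_digest(tag, check))     # True
```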
The Takeaway – So, is Encryption Secure?
To underestimate the NSA’s ambition or ability to compromise all encryption is a mistake. However, encryption remains the best defense we have against it (and others like it).
To the best of anyone’s knowledge, strong ciphers such as AES (despite misgivings about its NIST certification) and OpenVPN (with perfect forward secrecy) remain secure.
As Bruce Schneier, encryption specialist at Harvard's Berkman Center for Internet and Society and privacy advocate, famously stated,
“Trust the math. Encryption is your friend. Use it well, and do your best to ensure that nothing can compromise it. That’s how you can remain secure even in the face of the NSA.”
Remember, too, that the NSA is not the only potential adversary. However, most criminals, and even most governments, have nowhere near the NSA's ability to circumvent encryption.
The Importance of End-to-end Encryption
End-to-end (e2e) encryption means that you encrypt data on your own device. Only you hold the encryption keys (unless you share them). Without these keys, an adversary will find it extremely difficult to decrypt your data.
Many services and products do not use e2e encryption. Instead, they encrypt your data and hold the keys for you. This can be very convenient, as it allows for easy recovery of lost passwords, syncing across devices, and so forth. It does mean, however, that these third parties could be compelled to hand over your encryption keys.
A case in point is Microsoft. It encrypts all emails and files held in OneDrive (formerly SkyDrive), but it also holds the encryption keys. In 2013 it used these to unlock the emails and files of its 250 million worldwide users for inspection by the NSA.
Strongly avoid services that encrypt your data on their servers, rather than you encrypting your own data on your own machine.
Although strong encryption has recently become trendy, websites have been using strong end-to-end encryption for the last 20 years. After all, if websites were not secure, then online shopping or banking wouldn’t be possible.
The encryption protocol used for this is HTTPS, which stands for HTTP Secure (or HTTP over SSL/TLS). It is used for websites that need to secure users’ communications and is the backbone of internet security.
When you visit a non-secure HTTP website, data is transferred unencrypted. This means anyone watching can see everything you do while visiting that site. This includes your transaction details when making payments. It is even possible to alter the data transferred between you and the webserver.
With HTTPS, a cryptographic key exchange occurs when you first connect to the website. All subsequent actions on the website are encrypted, and thus hidden from prying eyes. Anyone watching can see that you have visited a certain website, but cannot see which individual pages you read, or any data transferred.
For example, the ProPrivacy.com website is secured using HTTPS. Unless you are using a VPN while reading this web page, your ISP can see that you have visited www.ProPrivacy.com, but cannot see that you are reading this particular article. HTTPS uses end-to-end encryption.
It is easy to tell if you visit a website secured by HTTPS – just look for a locked padlock icon to the left of the main URL/search bar.
There are issues relating to HTTPS, but in general, it is secure. If it wasn’t, none of the billions of financial transactions and transfers of personal data that happen every day on the internet would be possible. The internet itself (and possibly the world economy!) would collapse overnight.
An important limitation to encryption is that it does not necessarily protect users from the collection of metadata.
Even if the contents of emails, voice conversations, or web browsing sessions cannot be easily monitored, knowing when, where, from whom, to whom, and how regularly such communication takes place can tell an adversary a great deal. This is a powerful tool in the wrong hands.
For example, even if you use a securely encrypted messaging service such as WhatsApp, Facebook will still be able to tell who you are messaging, how often you message, how long you usually chat for, and more.
Although the NSA does target individual communications, its primary concern is the collection of metadata. As NSA General Counsel Stewart Baker has openly acknowledged,
“Metadata absolutely tells you everything about somebody’s life. If you have enough metadata, you don’t really need content.“
Technologies such as VPNs and Tor can make the collection of metadata very difficult. For example, an ISP cannot collect metadata relating to the browsing history of customers who use a VPN to hide their online activities.
Note, though, that many VPN providers themselves log some metadata. This should be a consideration when choosing a service to protect your privacy.
Please also note that mobile apps typically bypass any VPN that is running on your device and connect directly to their publishers' servers. Using a VPN, for example, will not prevent WhatsApp from sending metadata to Facebook.
Identify Your Threat Model
When considering how to protect your privacy and stay secure on the internet, carefully consider who or what worries you most. Defending yourself against everything is almost impossible. And any attempt to do so will likely seriously degrade the usability (and your enjoyment) of the internet.
Identifying to yourself that being caught downloading an illicit copy of Game of Thrones is a bigger worry than being targeted by a crack NSA TAO team for personalized surveillance is a good start. It will leave you less stressed, with a more useable internet and with more effective defenses against the threats that really matter to you.
Of course, if your name is Edward Snowden, then TAO teams will be part of your threat model…
Use FOSS Software
The terrifying scale of the NSA’s attack on public cryptography, and its deliberate weakening of common international encryption standards, has demonstrated that no proprietary software can be trusted, even software specifically designed with security in mind.
The NSA has co-opted or coerced hundreds of technology companies into building backdoors into their programs, or otherwise weakening security in order to allow it access. US and UK companies are particularly suspect, although the reports make it clear that companies across the world have acceded to NSA demands.
The problem with proprietary software is that the NSA can fairly easily approach and convince the sole developers and owners to play ball. In addition to this, their source code is kept secret. This makes it easy to add to or modify the code in dodgy ways without anyone noticing.
The best answer to this problem is to use free open-source software (FOSS). Often jointly developed by disparate and otherwise unconnected individuals, the source code is available to everyone to examine and peer-review. This minimizes the chances that someone has tampered with it.
Ideally, this code should also be compatible with other implementations, in order to minimize the possibility of a backdoor being built in.
It is, of course, possible that NSA agents have infiltrated open-source development groups and introduced malicious code without anyone’s knowledge. In addition, the sheer amount of code that many projects involve means that it is often impossible to fully peer-review all of it.
Despite these potential pitfalls, FOSS remains the most reliable and tamper-proof software available. If you truly care about privacy you should try to use it exclusively (up to and including using FOSS operating systems such as Linux).
Steps You Can Take to Improve Your Privacy
With the proviso that nothing is perfect, and if “they” really want to get you “they” probably can, there are steps you can take to improve your privacy.
Pay for Stuff Anonymously
One step to improving your privacy is to pay for things anonymously. When it comes to physical goods delivered to an actual address, this isn’t going to happen. Online services are a different kettle of fish, however.
It is increasingly common to find services that accept payment through Bitcoin and the like. A few, such as VPN service Mullvad, will even accept cash sent anonymously by post.
Bitcoin is a decentralized and open-source virtual currency that operates using peer-to-peer technology (much as BitTorrent and Skype do). The concept is particularly revolutionary and exciting because it does not require a middleman to work (for example a state-controlled bank).
Whether Bitcoins represent a good investment opportunity remains hotly debated and is not within the remit of this guide. It is also completely outside of my area of expertise!
Tor can also make a handy anti-censorship tool, although many governments go to great lengths to counter this by blocking access to the Tor network (with varied success).
Tor is a vital tool for internet users who require the maximum possible anonymity. VPNs, however, are a much more practical privacy tool for day-to-day internet use.
Other Ways To Stay Private Online
VPN and Tor are the most popular ways to maintain anonymity and evade censorship online, but there are other options. Proxy servers, in particular, are quite popular. In my opinion, however, they are inferior to using a VPN.
Other services which may be of interest include JonDonym, Lahana, I2P and Psiphon. You can combine many such services with Tor and/or VPN for greater security.
It’s not just the NSA who are out to get you: advertisers are too! They use some very sneaky tactics to follow you around the web and build a profile of you in order to sell you stuff. Or to sell this information to others who want to sell you stuff.
Most people who care are aware of HTTP cookies and how to clear them. Most browsers also have a Private Browsing mode that blocks cookies and prevents the browser from saving your internet history.
It is a good idea always to surf using Private Browsing. But this alone is not enough to stop organizations from tracking you across the internet. Your browser leaves many other traces as it goes.
Clear Cached DNS Entries
To speed up internet access, your browser caches the IP address it receives from your default DNS server (see the section on changing your DNS server later).
In Windows, you can see cached DNS information by typing “ipconfig /displaydns” at the command prompt (cmd.exe).
To clear the DNS cache in Windows, open the command prompt window and type: ipconfig /flushdns [enter]
Clear the cache in OS X 10.4 and earlier by opening Terminal and typing: lookupd -flushcache [enter]
To clear the cache in OS X 10.5 and above, open Terminal and type: dscacheutil -flushcache [enter]
Clear Flash Cookies
A particularly insidious development is the widespread use of Flash cookies. Traditional browser cookie controls do not block them, although most modern browsers now do.
These can track you in a similar manner to regular cookies. They can be located and manually deleted from the following directories:
Windows: C:\Users\[username]\AppData\Local\Macromedia\Flash Player\#SharedObjects
macOS: [User directory]/Library/Preferences/Macromedia/Flash Player/#SharedObjects and [User directory]/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/
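If you prefer not to dig through these folders by hand, the hunt for Flash cookies (.sol files) can be scripted. This is a minimal sketch; the candidate paths are taken from the list above and vary between Flash versions, so treat them as assumptions:

```python
import os

# Candidate Flash cookie directories; exact locations vary by platform
# and Flash version, so these paths are assumptions.
CANDIDATE_DIRS = [
    os.path.expanduser(r"~\AppData\Local\Macromedia\Flash Player\#SharedObjects"),
    os.path.expanduser("~/Library/Preferences/Macromedia/Flash Player/#SharedObjects"),
]

def find_flash_cookies(base_dirs):
    """Return the paths of all .sol files (Flash cookies) under the given dirs."""
    found = []
    for base in base_dirs:
        for root, _dirs, files in os.walk(base):  # silently skips missing dirs
            found.extend(os.path.join(root, f) for f in files if f.endswith(".sol"))
    return found

def delete_flash_cookies(base_dirs):
    """Remove every Flash cookie found under the given directories."""
    for path in find_flash_cookies(base_dirs):
        os.remove(path)

if __name__ == "__main__":
    print(f"Found {len(find_flash_cookies(CANDIDATE_DIRS))} Flash cookies")
```

Run the find step first to see what is there before deleting anything.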
A better tactic, however, is to use the CCleaner utility (available for Windows and macOS). When properly configured, it cleans out pesky Flash cookies, along with a host of other rubbish that slows your computer down and leaves traces of your activity behind.
The use of Flash cookies is declining, thanks to a growing awareness of them, including so-called “zombie cookies” (bits of persistent Flash code that respawn regular cookies when they are modified or deleted), and to the fact that most modern browsers include Flash cookies in their regular cookie controls. They still represent a serious threat, however.
Other Web Tracking Technologies
Internet companies are making far too much money to take this user backlash against tracking lying down. They are therefore deploying a number of increasingly devious and sophisticated tracking methods.
The way in which your browser is configured (especially the browser plugins used), together with details of your Operating System, allows you to be uniquely identified (and tracked) with a worryingly high degree of accuracy.
A particularly insidious (and ironic) aspect of this is that the more measures you take to avoid tracking (for example by using the plugins listed below), the more unique your browser fingerprint becomes.
The best defense against browser fingerprinting is to use as common and plain vanilla an OS and browser as possible. Unfortunately, this leaves you open to other forms of attack. It also reduces the day-to-day functionality of your computer to such an extent that most of us will find the idea impractical.
The more browser plugins you use, the more unique your browser is. Drat!
Using the Tor browser with Tor disabled is a partial solution to this problem. This will help make your fingerprint look identical to all other Tor users, while still benefiting from the additional hardening built into the Tor browser.
In addition to browser fingerprinting, other forms of fingerprinting are becoming more common. The most prominent of these is canvas fingerprinting, although audio and battery fingerprinting are also possible.
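To see why a collection of seemingly innocuous configuration details identifies you, consider how a tracker combines them. The sketch below hashes a handful of browser attributes into a single fingerprint; the attribute names and values are illustrative, not a real tracking script:

```python
import hashlib

def browser_fingerprint(attributes: dict) -> str:
    """Combine browser attributes into one stable identifier.

    No single attribute identifies you, but each one narrows the pool
    of matching users; together they are often effectively unique.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical values of the kind a real fingerprinting script collects.
profile = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/115.0",
    "screen": "1920x1080x24",
    "timezone": "UTC+1",
    "fonts": "Arial,Calibri,Consolas",
    "plugins": "uBlock Origin,NoScript",  # privacy add-ons make you *more* unique
}

print(browser_fingerprint(profile))
```

Note that changing any one attribute, such as adding a plugin, produces a completely different fingerprint, which is exactly why an unusual plugin set makes you stand out.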
HTML5 Web Storage
Built into HTML5 (the much-vaunted replacement to Flash) is web storage, also known as DOM (Document Object Model) storage. Creepier and much more powerful than cookies, web storage is an analogous way of storing data in a browser.
It is much more persistent, however, and has a much greater storage capacity. It also cannot normally be monitored, read, or selectively removed from your web browser.
All browsers enable web storage by default, but you can turn it off in Firefox and Internet Explorer.
Firefox users can also configure the BetterPrivacy add-on to remove web storage automatically on a regular basis. Chrome users can use the Click&Clean extension.
Remember that using these add-ons will increase your browser fingerprint uniqueness.
ETags
Part of HTTP, the protocol for the World Wide Web, ETags are markers used by your browser to track resource changes at specific URLs. By comparing changes in these markers with a database, websites can build up a fingerprint, which can be used to track you.
ETags can also be used to respawn (zombie-style) HTTP and HTML5 cookies. And once set on one site, they can be used by associate companies to track you as well.
This kind of cache tracking is virtually undetectable, so reliable prevention is very hard. Clearing your cache between each website you visit should work, as should turning off your cache altogether.
These methods are arduous, however, and will negatively impact your browsing experience. The Firefox add-on Secret Agent prevents tracking by ETags, although, again, it may increase your browser fingerprint (or, because of the way it works, perhaps not).
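ETag tracking abuses an ordinary caching mechanism. The If-None-Match header is real HTTP, but the server logic below is a hypothetical sketch of how a tracker turns it into a user ID:

```python
import secrets

# --- Tracking server (hypothetical) -------------------------------------
etag_db = {}  # ETag value -> visit count, i.e. a per-browser profile

def serve_resource(if_none_match=None):
    """Return (status, etag) for a request.

    A legitimate server uses the ETag only to answer '304 Not Modified';
    a tracker also treats it as a unique identifier for the browser.
    """
    if if_none_match in etag_db:
        etag_db[if_none_match] += 1   # recognized a returning browser
        return 304, if_none_match
    etag = secrets.token_hex(8)       # fresh unique marker for this browser
    etag_db[etag] = 1
    return 200, etag

# --- Browser: caches the ETag and sends it back as If-None-Match ---------
cached_etag = None
for _ in range(3):
    status, cached_etag = serve_resource(cached_etag)

print(etag_db)  # one entry, visit count 3: the tracker re-identified us
```

Clearing the cache (forgetting the ETag) resets the identity, which is why the only reliable defense mentioned above is to clear or disable the cache.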
History Stealing
Now we start to get really scary. History stealing (also known as history snooping) exploits the web’s design to allow a website you visit to discover your past browsing history.
The bad news is that this information can be combined with social network profiling to identify you. It is also almost impossible to prevent.
The only good news here is that social network profiling, while scarily effective, is not fully reliable. If you mask your IP address with a good VPN (or Tor), then you will go a long way towards disassociating your real identity from your tracked web behavior.
Browser Extensions for privacy
Pioneered by Firefox, all modern browsers now support a host of extensions. Many of these aim to improve your privacy while surfing the internet. Here is a list of my favorites that I don’t think anyone should surf without:
uBlock Origin
A lightweight FOSS ad-blocker that does double duty as an anti-tracking add-on. Chrome and Internet Explorer/Edge users can instead use Ghostery, although many users find that commercial software’s funding model to be somewhat shady.
Privacy Badger
Developed by the Electronic Frontier Foundation (EFF), this is a great FOSS anti-tracking add-on that does double duty as an ad-blocker. It is widely recommended to run Privacy Badger and uBlock Origin together for maximum protection.
HTTPS Everywhere
Another essential tool from the EFF. HTTPS Everywhere tries to ensure that you always connect to a website using a secure HTTPS connection if one is available.
Self-Destructing Cookies (Firefox)
Automatically deletes cookies when you close the browser tab that set them. This provides a high level of protection from tracking via cookies without “breaking” websites. It also provides protection against Flash/zombie cookies and ETags, and cleans DOM storage.
NoScript (Firefox)
This is an extremely powerful tool that gives you unparalleled control over which scripts run in your browser. However, many websites will not play nicely with NoScript, and it requires a fair bit of technical knowledge to configure and tweak it to work the way you want.
It is easy to add exceptions to a whitelist, but even this requires some understanding of the risks that might be involved. Not for the casual user then, but for web-savvy power-users, NoScript is difficult to beat. ScriptSafe for Chrome performs a similar job.
It is worth keeping NoScript installed even if you “Allow Scripts Globally,” as this still protects against nasty things such as cross-site scripting and clickjacking.
uMatrix
Developed by the team behind uBlock Origin, uMatrix is something of a half-way house between that add-on and NoScript. It provides a great deal of customizable protection, but requires a fair bit of work and know-how to set up correctly.
Note that if you use either NoScript or uMatrix then it is not necessary to also use uBlock Origin and Privacy Badger.
In addition to these extensions, most modern browsers (including mobile ones) include a Do Not Track option, which asks websites not to track you when you visit them.
It is definitely worth turning this option on. However, implementation is purely voluntary on the part of website owners, so it offers no guarantee of privacy.
This is not an exhaustive list of all the great privacy-related browser extensions out there.
I also have an article on how to make Firefox even more secure by changing settings in about:config.
As noted above, you should be aware that using any browser plugin increases the uniqueness of your browser. This makes you more susceptible to being tracked by browser fingerprinting.
Block “Reported Attack Sites” and “Web Forgeries” in Firefox
These settings can be very useful for protecting you against malicious attacks, but they impact your privacy by sharing details of your web traffic in order to work. If the tracking issues outweigh the benefits for you, then you might want to disable them.
Mobile Browser Security
The above extension list concentrates on desktop browsers, but it is just as important to protect the browsers on your smartphones and tablets.
Unfortunately, most mobile browsers have a great deal of catching up to do in this regard. Many of the Firefox extensions discussed above, however, will work on the mobile version of the browser.
To install these add-ons in Firefox for Android or Firefox for iOS, visit Options -> Tools -> Add-ons -> Browse all Firefox Add-ons, and search for them.
Thankfully Private Browsing, Do Not Track, and advanced cookie management are becoming increasingly common on all mobile browsers.
Use a Search Engine that Doesn’t Track You
Most search engines, including Google (in fact particularly Google), store information about you. This includes:
Your IP address.
Date and time of the query.
Query search terms.
Cookie ID – this cookie is deposited in your browser’s cookie folder, and uniquely identifies your computer. With it, a search engine provider can trace a search request back to your computer.
The search engine usually transmits this information to the requested web page. It also transmits it to the owners of third-party advertising banners on that page. As you surf the internet, advertisers build up a (potentially embarrassing and highly inaccurate) profile of you.
This is then used to target adverts tailored to your theoretical needs.
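The record kept per query is small, but the cookie ID is what ties the records together into a profile. A toy illustration of how scattered searches become one person (the field names are assumptions, not a real search engine's schema):

```python
from collections import defaultdict
from datetime import datetime, timezone

def log_query(log, ip, cookie_id, terms):
    """Store one search event, with the fields described in the list above."""
    log.append({
        "ip": ip,
        "time": datetime.now(timezone.utc).isoformat(),
        "terms": terms,
        "cookie_id": cookie_id,  # the field that links everything together
    })

def profile_by_cookie(log):
    """Group all queries by cookie ID: separate searches become one profile."""
    profile = defaultdict(list)
    for event in log:
        profile[event["cookie_id"]].append(event["terms"])
    return dict(profile)

log = []
log_query(log, "203.0.113.7", "abc123", "knee pain symptoms")
log_query(log, "198.51.100.2", "abc123", "divorce lawyer near me")  # new IP, same cookie
print(profile_by_cookie(log))
```

Note that even though the IP address changed between the two searches, the shared cookie ID still attributes both to the same person, which is why blocking or clearing cookies matters as much as masking your IP.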
In addition to this, governments and courts around the world regularly request search data from Google and other major search engines. This is usually duly handed over. For more details, see the Google Transparency Report on the number of User Data Requests received, and the number (at least partially) acceded to.
There are some search engines, however, that do not collect users’ data. These include:
One of the best-known private search engines, DuckDuckGo pledges not to track its users. Each search event is anonymous: in theory an infiltrator could view searches, but there is no profile attached to them for anyone to access.
DuckDuckGo says that it would comply with ordered legal requests, but as it doesn’t track users, “there is nothing useful to give them.” I have found DuckDuckGo to be very good, and through the use of “bangs”, it can also be made to search most other popular search engines anonymously too.
Unfortunately, many users do not find DDG’s search results to be as good as those returned by Google. The fact that it is a US-based company also concerns some.
Another popular Google alternative is StartPage. It is based in the Netherlands and returns Google search engine results. StartPage anonymizes these Google searches and promises not to store or share any personal information or use any identifying cookies.
The above search engines rely on trusting the search engine providers to maintain your anonymity. If this really worries you, then you might like to consider YaCy. It is a decentralized, distributed search engine, built using P2P technology.
This is a fantastic idea, and one that I really hope takes off. For now, however, it is more of an exciting curiosity than a fully-fledged and useful Google alternative.
The Filter Bubble
An added benefit of using a search engine that does not track you is that it avoids the “filter bubble” effect. Most search engines use your past search terms (and things you “Like” on social networks) to profile you. They can then return results they think will interest you.
This can result in you only receiving search returns that agree with your point of view. This locks you into a “filter bubble.” You do not get to see alternative viewpoints and opinions because they are downgraded in your search results.
This denies you access to the rich texture and multiplicity of human input. It is also very dangerous, as it can confirm prejudices and prevent you from seeing the “bigger picture.”
Delete Your Google History
You can view the information Google collects about you by signing in to your Google account and visiting My Activity. From here you can also Delete by topic or product. Since you are reading this privacy guide, you will probably want to Delete -> All time.
Of course, we only have Google’s word that they really delete this data. But it certainly can’t hurt to do this!
In order to prevent Google continuing to collect new information about you, visit Activity Controls. From here you can tell Google to stop collecting information on your use of various Google services.
These measures won’t stop someone who is deliberately spying on you from harvesting your information (such as the NSA). But it will help stop Google from profiling you.
Even if you plan on changing to one of the “no tracking” services listed above, most of us have built up a substantial Google History already, which anyone reading this article will likely want to be deleted.
Of course, deleting and disabling your Google history will mean that many Google services which rely on this information to deliver their highly personalized magic will either cease to function, or not function as well. So say goodbye to Google Now!
Secure Your Email
Most email services provide a secure HTTPS connection, and Google has even led the way in fixing the main weakness in SSL implementation. In this sense, they are secure email services. However, this is no good if the email service simply hands over your information to an adversary, as Google and Microsoft did with the NSA!
The answer lies in end-to-end email encryption. This is where the sender encrypts the email, and only the intended recipient can decrypt it. The biggest problem with using an encrypted email system is that you cannot impose it unilaterally. Your contacts – both recipients and senders – also need to play ball for the whole thing to work.
Trying to convince your granny to use PGP encryption will likely just lead to bafflement. Meanwhile trying to convince your customers to use it might make many of them very suspicious of you!
Most people regard Pretty Good Privacy (PGP) as the most secure and private way to send and receive emails. Unfortunately, PGP is not easy to use. At all.
This has resulted in a very low number of people willing to use PGP (basically just a few crypto-geeks).
With PGP, only the body of a message is encrypted. The header, recipient, send time, and so forth are not. This metadata can still be very valuable to an adversary, even if it cannot read the actual message.
Despite its limitations, PGP remains the only way to send email very securely.
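You can see the metadata problem by building an email whose body is encrypted. Anyone handling the message in transit still reads the headers; the ciphertext below is a placeholder standing in for real PGP output, and the addresses are made up:

```python
from email import message_from_bytes
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "whistleblower@example.com"   # all of these headers travel
msg["To"] = "journalist@example.org"        # in the clear, even with PGP
msg["Subject"] = "Those documents we discussed"
msg["Date"] = "Mon, 01 Jan 2018 12:00:00 +0000"
# Placeholder standing in for PGP ciphertext: only this part is protected.
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n...ciphertext...\n-----END PGP MESSAGE-----\n"
)

wire = msg.as_bytes()  # what an eavesdropper on the wire sees
intercepted = message_from_bytes(wire)
print(intercepted["From"], "->", intercepted["To"], "at", intercepted["Date"])
```

The body is opaque, but who talked to whom, about what subject, and when is not, which is exactly the metadata an adversary finds so valuable.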
GNU Privacy Guard
PGP was once open-source and free, but is now the property of Symantec. The Free Software Foundation has taken up the open source OpenPGP banner, however, and with major funding from the German government has released GNU Privacy Guard (also known as GnuPG or just GPG).
GnuPG is a free and open source alternative to PGP. It follows the OpenPGP standard and is fully compatible with PGP. It is available for Windows, macOS, and Linux. When referring to PGP, most people these days (including myself) mean GnuPG.
Generating a PGP key pair in Gpg4win
Although the basic program uses a simple command-line interface, more sophisticated versions are available for Windows (Gpg4win) and Mac (GPGTools). Alternately, EnigMail adds GnuPG functionality to the Thunderbird and SeaMonkey stand-alone email clients.
PGP on Mobile Devices
Android users should be pleased to know that an alpha release of GnuPG: Command-Line from the Guardian Project is available.
K-9 Mail is a well-regarded email client for Android with PGP support built in. It can be combined with Android Privacy Guard to provide a more user-friendly PGP experience. iOS users can give iPGMail a try.
Use PGP with Your Existing Webmail Service
PGP is a real pain to use. Such a big pain, in fact, that few people bother. Mailvelope is a browser extension for Firefox and Chrome that allows end-to-end PGP encryption within your browser.
It works with popular browser-based webmail services such as Gmail, Hotmail, Yahoo! and GMX. It makes using PGP about as painless as it gets. However, it is not as secure as using PGP with a dedicated email client.
Use a Dedicated Encrypted Webmail Service
Encrypted webmail services with a privacy focus have proliferated over the last two years or so. The most notable of these are ProtonMail and Tutanota. These are much easier to use than PGP and, unlike PGP, hide emails’ metadata. Both services now also allow non-users to securely reply to encrypted emails sent to them by users.
ProtonMail is much more secure than most webmail services.
The bottom line with such services is they are as easy to use as Gmail, while being much more private and secure. They will also not scan your emails to sell you stuff. However, never regard them as being anywhere near as secure as using PGP with a stand-alone email program.
Other Email Privacy Precautions
I discuss encrypting files and folders elsewhere. However, it is worth noting here that if you just wish to protect files, you can encrypt these before sending them by regular email.
It is also possible to encrypt stored emails by encrypting the email storage folder using a program such as VeraCrypt (discussed later). This page, for example, explains where Thunderbird stores email on different platforms.
At the end of the day, emails are an outdated communications system. And when it comes to privacy and security, email is fundamentally broken. End-to-end encrypted VoIP and instant messaging are much more secure ways to communicate online.
Secure Your Voice Conversations
Regular phone calls (landline or mobile) are never secure, and you cannot make them so. It’s not just the NSA and GCHQ; governments everywhere (where they have not already done so) are keen on recording all citizens’ phone calls.
Unlike emails and internet use, which can be obfuscated (as this article tries to show), phone conversations are always wide open.
Even if you buy anonymous and disposable “burner phones” (behavior which marks you out as either worryingly paranoid or engaged in highly criminal activity), a lot of information can be gathered through the collection of metadata.
Burner phones are also totally pointless unless the people you’re calling are equally paranoid and also using burner phones.
VoIP with End-to-end Encryption
If you want to keep your voice conversations completely private, then you need to use VoIP with end-to-end encryption (except, of course, when talking in person).
VoIP (Voice over Internet Protocol) apps allow you to talk over the internet, and often also to make video calls and send instant messages. VoIP services allow cheap or free calls anywhere in the world and have thus become extremely popular. Skype, in particular, has become a household name.
Unfortunately, Skype is now owned by Microsoft. It has perfectly demonstrated the problem with most such services (which is a very similar problem to that with email). VoIP connections to and from a middleman may be secure, but if the middleman just hands over your conversations to the NSA or some other government organization, this security is next to meaningless.
So, as with email, what is needed is end-to-end encryption where an encrypted tunnel is created directly between the participants in a conversation. And no-one else.
Good Skype Alternatives
Signal (Android, iOS) – in addition to being probably the most secure Instant Messaging (IM) app currently available (see below), Signal allows you to make secure VoIP calls.
As with messaging, Signal leverages your regular address book. If a contact also uses Signal then you can start an encrypted VoIP conversation with them. If a contact does not use Signal then you can either invite them to use the app, or talk with them using your regular insecure cellular phone connection.
The encryption Signal uses for VoIP calls is not as strong as the encryption it uses for text messaging. This is probably due to the fact that encrypting and decrypting data uses processing power, so stronger encryption would negatively impact the quality of calls.
For most purposes, this level of encryption should be more than sufficient. But if very high levels of privacy are required then you should probably stick to text messaging instead.
Jitsi (Windows, macOS, Linux, Android) – this free and open-source software offers all the functionality of Skype, except that everything is encrypted using ZRTP. This includes voice calls, videoconferencing, file transfer, and messaging.
The first time you connect to someone it can take a minute or two to set up the encrypted connection (designated by a padlock). But the encryption is subsequently transparent. As a straight Skype replacement for the desktop, Jitsi is difficult to beat.
Secure Your Text Messages
This section has a great deal of cross-over with the previous one on VoIP. Many VoIP services, including both Signal and Jitsi, also have chat/IM functionality built in.
Signal (Android, iOS) – developed by crypto-legend Moxie Marlinspike, Signal is widely regarded as the most secure text messaging app available. It is not without issues, but Signal is about as good as it currently gets when it comes to having a secure and private conversation (except whispering to someone in person, of course!).
Signal replaces your phone’s default text messaging app, and uses your phone’s regular contact list. If a contact also uses Signal then any messages sent to or received from them are securely end-to-end encrypted.
If a contact does not use Signal then you can invite them to use the app, or just send an unencrypted text message via regular SMS. The beauty of this system is that Signal is almost transparent in use, which should make it easier to convince friends, family and colleagues to use the app!
Jitsi (Windows, macOS, Linux, Android (experimental)) – is a great desktop messenger app, and is very secure. It is almost certainly not quite as secure as Signal, however.
A Note on WhatsApp
The very popular WhatsApp app now uses the same end-to-end encryption developed for Signal. Unlike Signal, however, WhatsApp (owned by Facebook) retains metadata and has other weaknesses not present in the Signal app.
Despite these issues, most of your contacts likely use WhatsApp and are unlikely to be convinced to switch to Signal. Given this all-too-common situation, WhatsApp provides vastly improved security and privacy that your contacts might actually use.
Unfortunately, this argument has been badly undermined by a recent announcement that WhatsApp will start sharing users’ address books with parent company Facebook by default. This can be disabled, but the vast majority of users will not bother to do so.
Ditch the Cell Phone!
While we are on the subject of phones, I should also mention that when you carry your phone, your every movement can be tracked. And it’s not just by things such as GPS and Google Now/Siri.
Phone towers can easily track even the most modest cell phone. In addition to this, use of Stingray IMSI-catchers has proliferated among police forces the world over.
These devices mimic cell phone towers. They can not only uniquely identify and track individual cell phones, but can intercept phone calls, SMS messages, and unencrypted internet content.
Using an end-to-end encrypted messaging app such as Signal will prevent such interception. However, if you don’t want to be uniquely identified by your phone and tracked, the only real solution is to leave your phone at home.
Secure Your Cloud Storage
As internet speeds increase, server-level storage becomes cheaper, and the different devices we use to access the internet more plentiful, it is becoming increasingly clear that cloud storage is the future.
The problem, of course, is ensuring that files stored in “the cloud” remain secure and private. And here the big players have proven themselves woefully inadequate. Google, Dropbox, Amazon, Apple, and Microsoft have all worked in cahoots with the NSA. Their terms and conditions also reserve the right to investigate your files and hand them over to the authorities if they receive a court order.
To ensure that your files are secure in the cloud, there are a number of approaches you can take.
Manually Encrypt Your Files Before Uploading Them to the Cloud
The simplest and most secure method is to manually encrypt your files using a program such as VeraCrypt or EncFS. This has the advantage that you can carry on using your favorite cloud storage service, no matter how inherently insecure it is, as you hold all the encryption keys to your files.
As discussed later, mobile apps that can handle VeraCrypt or EncFS files exist, allowing for synchronization across devices and platforms. Features such as file versioning will not work with individual files as the encrypted container hides them, but it is possible to recover past versions of the container.
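The workflow itself is simple: encrypt locally, upload only ciphertext, keep the key yourself. VeraCrypt and EncFS use strong, audited ciphers such as AES; pure-stdlib Python has no AES, so the sketch below substitutes a toy SHA-256 keystream purely to illustrate the flow. Do not use this toy cipher for real data:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. Illustration only --
    real tools such as VeraCrypt use AES, not this construction."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt locally, before anything leaves your machine."""
    nonce = secrets.token_bytes(16)  # fresh nonce so identical files differ
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

# The point of the workflow: the cloud provider only ever sees `blob`,
# and the key never leaves your device.
key = secrets.token_bytes(32)
blob = encrypt(key, b"my tax records")
assert decrypt(key, blob) == b"my tax records"
```

Because you hold the only copy of the key, the provider cannot decrypt your files even under a court order, which is the whole advantage of encrypting before uploading.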
If you are in the market for a good Dropbox alternative, you may like to check out ProPrivacy’s sister website BestBackups. It features news and reviews of the best and the rest when it comes to cloud storage services.
Use an Automatically Encrypted Cloud Service
These services automatically encrypt files before uploading them to the cloud. Avoid any service that encrypts files server-side, as these are vulnerable to being decrypted by the service provider.
Any changes to files or folders sync with locally decrypted versions before being secured and sent to the cloud.
All services listed below have iOS and Android apps, so you can easily sync across your computers and mobile devices. This convenience comes at a small security price, as the services briefly store your password on their servers to authenticate you and direct you to your files.
TeamDrive – this German cloud backup and file synchronization service is primarily aimed at businesses. It also offers free and low-cost personal accounts. TeamDrive uses proprietary software, but has been certified by the Independent Regional Centre for Data Protection of Schleswig-Holstein.
Tresorit – is based in Switzerland, so users benefit from that country’s strong data protection laws. It provides client-side encryption, although a kink is that users’ data is stored on Microsoft Windows Azure servers. Given widespread distrust of all things US, this is an odd choice. But as client-side encryption ensures the cryptographic keys are kept with the user at all times, it shouldn’t be a problem.
SpiderOak – available for all major platforms, SpiderOak offers a “zero knowledge,” secure, automatically encrypted cloud service. It uses a combination of 2048-bit RSA and 256-bit AES to encrypt your files.
Note that all of these cloud services are closed source. This means that we just have to trust them to do what they claim to do (although TeamDrive has been independently audited).
Use Syncthing for Cloudless Syncing
Syncthing is a secure decentralized peer-to-peer (P2P) file synchronization program that can sync files between devices on a local network or over the internet.
Acting more or less as a Dropbox replacement, Syncthing synchronizes files and folders across devices, but does so without storing them in ‘the cloud.’ In many ways, it is therefore similar to BitTorrent Sync, except that it is completely free and open-source (FOSS).
Syncthing allows you to securely backup data without the need to trust a third-party cloud provider. Data is backed up to a computer or server that you directly control, and is at no point stored by a third party.
This is referred to in techie circles as a “BYO (cloud) model,” where you provide the hardware, instead of a third-party commercial vendor. The encryption used is also fully end-to-end, as you encrypt it on your device, and only you can decrypt it. Nobody else holds the encryption keys.
A limitation of the system is that, as it is not a true cloud service, it cannot be used as an extra drive by portable devices with limited storage. On the plus side, however, you are using your own storage, and so are not tied to cloud providers’ data limits (or charges).
Encrypt Your Local Files, Folders, and Drives
While the focus of this document is on internet security and privacy, an important aspect of securing your digital life is to ensure that locally stored files cannot be accessed by unwanted parties.
Of course, it is not just about local storage. You can also encrypt files before emailing them or uploading them to cloud storage.
Windows, macOS, Linux. Mobile support for VeraCrypt containers is available via third-party apps.
VeraCrypt is an open-source full-disk encryption program. With VeraCrypt you can:
Create a virtual encrypted disk (volume) which you can mount and use just like a real disk (and which can be made into a Hidden Volume).
Encrypt an entire partition or storage device (for example a hard drive or USB stick).
Create a partition or storage drive containing an entire operating system (which can be hidden).
All encryption is performed on-the-fly in real-time, making VeraCrypt transparent in operation. The ability to create hidden volumes and hidden operating systems provides plausible deniability, as it should be impossible to prove they exist (as long as all the correct precautions are taken).
AES Crypt – this nifty little cross-platform app is very handy for encrypting individual files. Although only individual files can be encrypted, this limitation can be overcome somewhat by creating zip files out of folders, and then encrypting the zip file with AES Crypt.
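The zip-then-encrypt workaround is a two-step process: bundle the folder into a single archive, then encrypt that one file. A minimal sketch of the first step in Python (the folder and file names here are made up for illustration; after creating the archive you would encrypt the resulting `.zip` with AES Crypt):

```python
import os
import shutil
import tempfile

# Create a throwaway folder with a file in it to stand in for real data.
folder = tempfile.mkdtemp()
with open(os.path.join(folder, "notes.txt"), "w") as f:
    f.write("secret notes")

# Bundle the whole folder into a single zip file...
archive = shutil.make_archive(folder, "zip", root_dir=folder)

# ...which can then be encrypted as one file with AES Crypt.
print(archive)
```

Note that the zip file itself is not encrypted; it only exists so that AES Crypt has a single file to work on, and it should be securely deleted once the encrypted copy is made.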
Full Disk Encryption on Mobile Devices
All new iPhones and iPads now ship with full disk encryption. Some Android devices do as well. If not, you can manually turn it on. Please see How to Encrypt your Android Phone for more details.
Use Antivirus/Anti-malware and Firewall Software
Note: ProPrivacy has a sister site dedicated to anti-virus software – BestAntivirus.com. If you would like to choose an antivirus package to fit your needs, please take the time to check it out! Now, back to the guide…
It almost goes without saying, but as this is an “ultimate guide”, I’ll say it anyway:
Always use anti-virus software, and make sure that it is up-to-date!
Not only can viruses really screw up your system, but they can let hackers enter it. This gives them access to all your (unencrypted) files and emails, webcam, passwords stored in Firefox (if no master password is set), and much more. Keyloggers are particularly dangerous as they can be used to access bank details and track pretty much everything you do on your computer.
It is also worth remembering that it is not just criminal hackers who use viruses! The Syrian government, for example, launched a malware campaign known as Blackshades, aimed at ferreting out and spying on political dissidents.
Most people are aware they should be using anti-virus software on their desktop computers, but many neglect their mobile devices. While there are fewer viruses targeting mobile devices at present, smartphones and tablets are sophisticated and powerful computers. As such, they are vulnerable to attack by viruses and need to be protected.
Mac users are famously lax about installing anti-virus software. They cite the “fact” that macOS’s Unix architecture makes virus attacks difficult (this is hotly contested, by the way), the fact that most hackers concentrate on Windows because most computers run Windows (true), and the anecdotal evidence of many Mac users who have gone for years without anti-virus software and never experienced any problems.
This is an illusion, however. Macs are not immune to viruses, and anyone serious about their security should always use good anti-virus software.
Free Vs. Paid-for Antivirus Software
The general consensus is that free antivirus software prevents viruses as well as paid-for alternatives. But paid-for software provides better support and more comprehensive “suites” of software. These are designed to protect your computer from a range of threats, for example by combining antivirus, anti-phishing, anti-malware and firewall functions.
Similar levels of protection are available for free but require the use of various different programs. Also, most free software is for personal use only, and businesses are usually required to pay for a license. A bigger concern, however, is how publishers can afford to offer free anti-virus products. AVG, for example, can sell users’ search and browser history data to advertisers in order to “make money” from its free antivirus software.
Although I recommend free products below (as most major anti-virus products have a free version), it may therefore be a very good idea to upgrade to a premium version of the software.
Good Anti-virus Software Options
Windows – the most popular free antivirus programs for Windows are Avast! Free Antivirus and AVG AntiVirus Free Edition (which I recommend avoiding for the reason above). Plenty of others are also available. Personally, I use the built-in Windows Defender for real-time protection, plus run a weekly manual scan using Malwarebytes Free. A paid-for version of Malwarebytes is also available that will do this automatically, plus provide real-time protection.
macOS – Avast! Free Antivirus for Mac is well regarded, although other decent free options are available. In fact, the free options are better regarded than the paid-for ones, so I just recommend using one of them!
Android – again, there are a number of options, both free and paid for. I use Malwarebytes because it is nice and lightweight. Avast! is more fully-featured, however, and includes a firewall.
iOS – Apple is still in denial about the fact that iOS is as vulnerable as any other platform to virus attacks. Indeed, in a move that is as alarming as it is bizarre, it seems that Apple has purged the App Store of antivirus apps! I certainly have been unable to find any iOS antivirus apps. A VPN will help somewhat, as a VPN for iPhone will encrypt your data and protect you from hackers and surveillance.
Linux – the usual suspects: Avast! and Kaspersky are available for Linux. These work very well.
A personal firewall monitors network traffic to and from your computer. It can be configured to allow and disallow traffic based on a set of rules. In use, firewalls can be a bit of a pain, but they help ensure that nothing is accessing your computer, and that no program on your computer is accessing the internet when it shouldn’t be.
Both Windows and macOS ship with built-in firewalls. These are, however, only one-way firewalls: they filter incoming traffic but not outgoing traffic. This makes them much more user-friendly than true two-way firewalls, but much less effective, as you cannot monitor or control what the programs (including viruses) already installed on your computer are doing.
The biggest problem with using a two-way firewall is determining which programs are ‘ok’ to access the internet and which are potentially malicious. Perfectly legitimate Windows processes can, for instance, appear pretty obscure. Once set up, however, they become fairly transparent in use.
Some Good Two-way Firewall Programs
Windows – Comodo Firewall Free and ZoneAlarm Free Firewall are free and good. Another approach is to use TinyWall. This very lightweight free program is not a firewall per se. It instead adds the ability to monitor outgoing connections to the built-in Windows Firewall.
Glasswire is also not a true Firewall because it does not allow you to create rules or filters, or block specific IP connections. What it does do is present network information in a beautiful and clear manner. This makes it easy to understand what is going on, and therefore easier to make informed decisions about how to deal with it.
macOS – Little Snitch adds the ability to monitor outgoing connections to the built-in macOS firewall. It is great, but a little pricey at $25.
Android – as noted above, the free Avast! for Android app includes a firewall.
iOS – the only iOS firewall I know of is Firewall iP. It requires a jailbroken device to run.
Linux – there are many Linux firewall programs and dedicated firewall distros available. iptables is bundled with just about every Linux distro. It is an extremely flexible firewall utility for anyone who cares to master it.
As I noted near the beginning of this guide, no commercial software can be trusted not to have a back-door built into it by the NSA.
A more secure alternative to Windows (especially Windows 10!) or macOS is Linux. This is a free and open-source operating system. Note, though, that some builds incorporate components which are not open source.
It is far less likely that Linux has been compromised by the NSA. Of course, that’s not to say that the NSA hasn’t tried. It is a much more stable and generally secure OS than its commercial rivals.
TAILS is a secure Linux distro favored by Edward Snowden. The default browser is IceWeasel, a Firefox spinoff for Debian that has been given the full Tor Browser Bundle treatment.
Despite great strides made in the right direction, Linux, unfortunately, remains less user-friendly than either Windows or macOS. Less computer-literate users may, therefore, struggle with it.
If you are serious about privacy, however, Linux is the way forward. One of the best things about it is that you can run the entire OS from a Live CD, without the need to install it. This makes it easy to try out different Linux distros. It also adds an extra layer of security when you access the internet.
This is because the OS exists completely separately from your regular OS. The temporary OS could be compromised, but as it exists only in RAM and disappears when you boot back into your normal OS, this is not a major problem.
Example Linux Distributions
There are hundreds of Linux distros out there. These range from full desktop replacements to niche distributions.
Ubuntu – a very popular Linux distro because it is one of the easiest to use. A great deal of assistance is available from the enthusiastic Ubuntu community. It therefore makes a good starting point for those interested in using a much more secure operating system.
Mint – is another popular Linux distro aimed at novice users. It is much more Windows-like than Ubuntu, so Windows refugees are often more comfortable using it than Ubuntu. Mint is built on top of Ubuntu, so most Ubuntu-specific tips and programs also work in Mint. This includes VPN clients.
Debian – Mint is based on Ubuntu, and Ubuntu is based on Debian. This highly flexible and customizable Linux OS is popular with more experienced users.
Tails – famously the OS of choice for Edward Snowden. It is very secure and routes all internet connections through the Tor network. It is, however, a highly specialized privacy tool, and as such makes a poor general-purpose desktop replacement for Windows or macOS.
Ubuntu, Mint and Debian all make great, user-friendly desktop replacements for Windows and macOS. Ubuntu and Mint are widely recommended as good starting points for Linux newbies.
Use a Virtual Machine (VM)
An additional level of security can be achieved by only accessing the internet (or only accessing it for certain tasks) using a ‘virtual machine.’ These are software programs that emulate a hard drive onto which an operating system such as Windows or Linux is installed. Note that VM-ing macOS is tricky.
This effectively emulates a computer through software, which runs on top of your normal OS.
The beauty of this approach is that all files are self-contained within the virtual machine. The “host” computer cannot be infected by viruses caught inside the VM. This is why such a set-up is popular among hardcore P2P downloaders.
The virtual machine can also be entirely encrypted. It can even be “hidden,” using programs such as VeraCrypt (see above).
Virtual machines emulate hardware. They run another whole OS on top of your “standard” OS. Using one therefore requires substantial overheads in terms of processing power and memory use. That said, Linux distros tend to be quite lightweight. This means that many modern computers can handle these overheads with minimal impact on perceived performance.
Popular VM software includes the free VirtualBox and VMWare Player, and the premium ($273.90) enterprise-level VMware Workstation. As noted above, VeraCrypt lets you encrypt an entire OS, or even hide its existence.
Give Whonix a Try
Whonix works inside a VirtualBox virtual machine. This ensures that DNS leaks are not possible, and that “not even malware with root privileges can find out the user’s real IP.”
It consists of two parts, the first of which acts as a Tor gateway (known as Whonix Gateway). The second (known as a Whonix Workstation), is on a completely isolated network. This routes all its connections through the Tor gateway.
This isolation of the workstation away from the internet connection (and all isolated from the host OS inside a VM), makes Whonix highly secure.
A Note on Windows 10
More than any other version of Microsoft’s OS, Windows 10 is a privacy nightmare. Even with all its data collection options disabled, Windows 10 continues to send a great deal of telemetry data back to Microsoft.
This situation became even worse when the Anniversary Update (version 1607) removed the option to disable Cortana. This is a service that collects a great deal of information about you in order to provide a highly personalized computing experience. Much like Google Now, it is very useful, but achieves this usefulness by invading your privacy significantly.
The best advice in terms of privacy is to avoid using Windows altogether. macOS is little better. Use Linux instead. You can always set up your system to dual-boot into either Linux or Windows and only use Windows when absolutely necessary. For example, when playing games, many of which only work in Windows.
If you really must use Windows, then a number of third party apps exist to help tighten up security and privacy much more than playing with Windows settings ever can. These typically get under the hood of Windows, adjusting registry settings and introducing firewall rules to prevent telemetry being sent to Microsoft.
They can be very effective. However, you are giving these programs direct access to the deepest workings of your OS. So let’s just hope that their developers are honest! Use of such apps is very much at your own risk.
I use W10 Privacy. It works well but is not open-source.
Password-protect Your BIOS
Full-disk encryption using VeraCrypt is a great way to physically secure your drives. But for this to be properly effective it is essential to set strong passwords in BIOS for both starting up and modifying the BIOS settings. It is also a good idea to prevent boot-up from any device other than your hard drive.
It has long been widely known that the Flash Player is an incredibly insecure piece of software (see also Flash Cookies). Many major players in the internet industry have made strong efforts to eradicate its use.
Apple products, for example, no longer support Flash (by default). In addition, YouTube videos are now served up using HTML5 rather than Flash.
The best policy is to disable Flash in your browser.
In Firefox, at the very least set Flash to “Ask to Activate,” so you have a choice about whether to load the Flash content.
If you really must view Flash content then I suggest doing so in a separate browser that you do not use for anything else.
Change DNS Servers and Secure Your DNS with DNSCrypt
We are used to typing domain names that are easy to understand and remember into our web browsers. But these domain names are not the “true” addresses of websites. The “true” address, as understood by a computer, is a set of numbers known as an IP address.
To translate domain names to IP addresses, for example, ProPrivacy.com to its IP address of 18.104.22.168, the Domain Name System (DNS) is used.
By default, this translation process is performed on your ISP’s DNS servers. This ensures your ISP has a record of all websites you visit.
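The translation step is easy to see from code. A minimal sketch using Python’s standard library — the demo resolves `localhost` so it works offline, but with a public hostname such as `proprivacy.com` the query would normally go out to whichever DNS server your system is configured to use (by default, your ISP’s):

```python
import socket

def resolve(hostname):
    """Ask the system's configured DNS resolver for a host's IP addresses."""
    results = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each result tuple ends with (address, port, ...); keep unique addresses.
    return sorted({r[4][0] for r in results})

# "localhost" resolves locally; a real domain name would trigger a DNS query
# that your resolver (and so your ISP) can see and log.
print(resolve("localhost"))
```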
Graffiti in Istanbul encouraging the use of Google Public DNS as an anti-censorship tactic during the government’s 2014 crackdown on Twitter and YouTube.
Fortunately, there are a number of free and secure public DNS servers, including OpenDNS and Comodo Secure DNS. I prefer the non-profit, decentralized, open, uncensored and democratic OpenNIC.
I recommend changing your system settings to use one of these instead of your ISP’s servers.
What SSL is to HTTP traffic (turning it into encrypted HTTPS traffic), DNSCrypt is to DNS traffic.
DNS was not built with security in mind, and it is vulnerable to a number of attacks. The most important of these is a “man-in-the-middle” attack known as DNS spoofing (or DNS cache poisoning). This is where the attacker intercepts and redirects a DNS request. This could, for example, be used to redirect a legitimate request for a banking service to a spoof website designed to collect victims’ account details and passwords.
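The mechanics of cache poisoning can be shown with a toy resolver cache. Everything below is deliberately simplified and the names and addresses are made up (the IPs come from the reserved documentation ranges); a real attack forges UDP responses, but the effect is the same: the victim’s cache ends up mapping a legitimate name to an attacker-controlled address.

```python
# A stub resolver keeps a cache of name -> IP mappings.
cache = {}

def cache_answer(name, ip):
    """A naive resolver trusts whichever answer arrives first."""
    cache.setdefault(name, ip)

# The legitimate answer for the bank's site (addresses are made up).
legit = ("mybank.example", "203.0.113.10")
# A spoofed answer raced in by an attacker, pointing at their server.
spoofed = ("mybank.example", "198.51.100.66")

# If the forged response arrives before the real one, it wins...
cache_answer(*spoofed)
cache_answer(*legit)

# ...and every later lookup silently goes to the attacker.
print(cache["mybank.example"])  # 198.51.100.66
```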
The open-source DNSCrypt protocol solves this problem by encrypting your DNS requests. It also authenticates communications between your device and the DNS server.
DNSCrypt is available for most platforms (mobile devices must be rooted/jailbroken), but does require support from your chosen DNS server. This includes many OpenNIC options.
DNS and VPNs
This DNS translation process is usually performed by your ISP. When using a VPN, however, all DNS requests should be sent through your encrypted VPN tunnel. They are then handled by your VPN provider instead.
Using the right scripts, a website can determine which server resolved a DNS request directed to it. This will not allow it to pinpoint your exact real IP address but will allow it to determine your ISP (unless you have changed DNS servers, as outlined above).
Letting your ISP resolve DNS in this way foils attempts to geo-spoof your location, and allows the police and the like to obtain your details from your ISP. ISPs keep records of these things, while good VPN providers keep no logs.
Most VPN providers run their own dedicated DNS servers in order to perform this DNS translation task themselves. If using a good VPN, therefore, you do not need to change your DNS server or use DNSCrypt, as the DNS requests are encrypted by the VPN.
Unfortunately, DNS requests do not always get sent through the VPN tunnel as they are supposed to. This is known as a DNS leak.
Note that many VPN providers offer “DNS leak protection” as a feature of their custom software. These apps use firewall rules to route all internet traffic through the VPN tunnel, including DNS requests. They are usually very effective.
Use Secure Passwords
We have all been told this often enough to make us want to pull our hair out! Use long complex passwords, using combinations of standard letters, capitals, and numbers. And use a different such password for each service… Argh!
Given that many of us find remembering our own name in the morning a challenge, this kind of advice can be next to useless.
Fortunately, help is at hand!
Low Tech Solutions
Here are some ideas that will vastly improve the security of your passwords, and take almost no effort whatsoever to implement:
Insert a random space into your password – this simple measure greatly reduces the chance of anyone cracking your password. Not only does it introduce another mathematical variable into the equation, but most would-be crackers assume that passwords consist of one continuous word. They, therefore, concentrate their efforts in that direction.
Use a phrase as your password – even better, this method lets you add lots of spaces and use many words in an easy-to-remember manner. Instead of having “pancakes” as your password, you could have ‘I usually like 12 pancakes for breakfast’ instead.
Use Diceware – this is a method for creating strong passphrases. Individual words in the passphrase are generated randomly by rolling dice, which introduces a high degree of entropy into the result. Diceware passphrases are therefore well regarded by cryptographers. The EFF has recently introduced a new expanded Diceware wordlist aimed at further improving Diceware passphrase results.
Use more than four digits in your PIN – where possible, use more than four digits for your PINs. As with adding an extra space to words, this makes the code mathematically much harder to break, and most crackers work on the assumption that only four digits are used.
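The mathematics behind these tips is easy to check. The sketch below compares the search space of PINs of different lengths with a Diceware-style passphrase (7,776 is the size of the standard Diceware wordlist: five dice rolls with six outcomes each). The generator at the end uses Python’s cryptographically secure `secrets` module; its tiny word list is a stand-in purely to show the mechanics, not a real Diceware list:

```python
import math
import secrets

def entropy_bits(alternatives, length):
    """Bits of entropy for `length` independent choices from `alternatives`."""
    return length * math.log2(alternatives)

# A 4-digit PIN vs a 6-digit PIN: each extra digit multiplies
# the search space by 10.
print(round(entropy_bits(10, 4), 1))  # 13.3 bits
print(round(entropy_bits(10, 6), 1))  # 19.9 bits

# Six words rolled from the 7,776-word Diceware list.
print(round(entropy_bits(7776, 6), 1))  # 77.5 bits

# Generating a passphrase: a real Diceware list has 7,776 words;
# this stand-in list just demonstrates the selection step.
wordlist = ["correct", "horse", "battery", "staple", "pancake", "dice"]
passphrase = " ".join(secrets.choice(wordlist) for _ in range(6))
print(passphrase)
```

The jump from ~13 bits for a four-digit PIN to ~77 bits for a six-word passphrase is why cryptographers favor Diceware: each added word multiplies the attacker’s work by 7,776.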
High Tech Solutions
Where mortals fear to tread, software developers jump in with both feet! There are a plethora of password management programs available. My pick of the bunch are:
KeePass (multi-platform) – this popular free and open-source (FOSS) password manager will generate complex passwords for you and store them behind strong encryption. A plethora of plugins allows for all sorts of customization and increased capability.
With plugins, you can use the Twofish cipher instead of the default AES, for example, while PassIFox and chromeIPass provide full browser integration. KeePass itself is Windows-only, but KeePassX is an open-source clone for macOS and Linux, as are iKeePass for iOS and Keepass2Android for Android.
Sticky Password (Windows, macOS, Android, iOS) – is a great desktop password solution that impressed me with its ability to sync over Wi-Fi and support for so many browsers.
Its security measures also appear to be very tight. Given these solid foundations, the fact that Sticky Password works brilliantly on mobile devices (especially for Firefox mobile users) may be a compelling reason to choose this over its FOSS rival.
Social networking: where you are encouraged to share every random thought that comes into your head, photos of what you had for dinner, and blow-by-blow accounts of your relationship meltdown.
It is the antithesis of concepts such as privacy and security.
Facebook is “worse” than Twitter in terms of privacy, as it sells every detail of your life to profiling-hungry advertisers. It also hands your private data over to the NSA. But all social networks are inherently about sharing information.
Meanwhile, all commercial networks make a profit by harvesting your personal details (likes, dislikes, places you visit, things you talk about, the people you hang out with, and what they like and dislike) and then selling them.
By far the best way to maintain your privacy on social networks is to avoid them altogether. Delete all your existing accounts!
This can be tricky. It is unlikely, for example, that you will be able to remove all traces of your presence on Facebook. Even worse is that these social networks are increasingly where we chat, share photos and otherwise interact with our friends.
They are a primary reason for using the internet and play a central role in our social lives. In short, we aren’t willing to give them up.
Below, then, are some ideas for trying to keep a modicum of privacy when social networking.
If there are things you don’t want (or that shouldn’t be) made public, don’t post details about them on Facebook! Once posted, it is very difficult to retract anything you have said. Especially if it has been re-posted (or re-tweeted).
Keep private conversations private
It is all too common for people to discuss intimate details of a planned dinner date, or conversely, to have personal rows, using public channels. Make use of Message (Facebook) and DM (Twitter) instead.
This won’t hide your conversations from advertisers, the law, or the NSA, but it will keep potentially embarrassing interactions away from friends and loved ones. They probably really don’t want to hear certain things, anyway!
There is little to stop you from using a false name. In fact, given employers almost routinely check their staff’s (and potential staff’s) Facebook pages, using at least two aliases is almost a must. Opt for a sensible one with your real name, which is designed to make you look good to employers, and another where friends can post wildly drunken pictures of you.
Remember that it is not just names that you can lie about. You can also happily fib about your date of birth, interests, gender, where you live, or anything else that will put advertisers and other trackers off the scent.
On a more serious note, bloggers living under repressive regimes should always use aliases (together with IP cloaking measures such as a VPN) when publishing posts that may threaten their life or liberty.
Keep checking your privacy settings
Facebook is notorious for continually changing the way its privacy settings work. It also makes its privacy policies as opaque as possible. It is worth regularly checking the privacy settings on all social networks to make sure they are as tight as possible.
Ensure that posts and photos are only shared with Friends, for example, not Friends of Friends or “Public.” In Facebook, ensure that “Review posts friends tag you in before they appear on your timeline” (under Privacy Settings -> Timeline and Tagging) is set to “On.” This can help limit the damage “friends” are able to do to your profile.
Avoid All Five Eyes-based Services
The Five Eyes (FVEY) spying alliance includes Australia, Canada, New Zealand, the United Kingdom, and the United States. Edward Snowden has described it as a “supra-national intelligence organization that doesn’t answer to the known laws of its own countries.”
Intelligence is freely shared between security organizations of member countries, a practice that is used to evade legal restrictions on spying on their own citizens. It is, therefore, a very good idea to avoid all dealings with FVEY-based companies.
Indeed, there is a strong argument that you should avoid dealing with any company based in a country belonging to the wider Fourteen Eyes alliance.
The US and NSA Spying
The scope of the NSA’s PRISM spying program is staggering. Edward Snowden’s revelations have demonstrated it has the power to co-opt any US-based company. This includes monitoring information relating to non-US citizens and pretty much anybody else in the world. It also includes monitoring all internet traffic that passes through the US’s internet backbone.
Other countries’ governments seem desperate to increase their own control over their citizens’ data. Nothing, however, matches the scale, sophistication, or reach of PRISM. This includes China’s attempts at internet surveillance.
Suggesting that every US-based company may be complicit in handing every user’s personal information over to a secretive and largely unaccountable spying organization might sound the stuff of paranoid science-fiction fantasy. As recent events have proved, however, this is terrifyingly close to the truth…
Note also that due to provisions in both the Patriot Act and the Foreign Intelligence Surveillance Act (FISA), US companies must hand over users’ data. This applies even if that user is a non-US citizen, and the data has never been stored in the US.
The UK and GCHQ Spying
The UK’s GCHQ is in bed with the NSA. It also carries out some particularly heinous and ambitious spying projects of its own. According to Edward Snowden, “they [GCHQ] are worse than the US.”
This already bad situation is about to worsen. The impending Investigatory Powers Bill (IPB) “formalizes” this covert spying into law. It also expands the UK government’s surveillance capabilities to a terrifying degree with very little in the way of meaningful oversight.
I therefore strongly recommend avoiding all companies and services based in the UK.
Is Privacy Worth it?
This question is worth considering. Almost all the measures outlined above mark you out for special attention by the likes of the NSA. They also add extra layers of complexity and effort to everyday tasks.
Indeed, much of the cool functionality of new web-based services relies on knowing a lot about you! Google Now is an excellent case in point. An “intelligent personal assistant,” this software’s ability to anticipate what information you require is uncanny.
It can, for example, remind you that you need to leave the office to catch the bus “now” if you want to get home at your usual time. It will also provide navigation to the nearest bus stop, and alternative timetables should you miss the bus.
Some of the most exciting and interesting developments in human-computer interaction rely on a full-scale invasion of privacy. To box yourself in with encryption and other privacy protection methods is to reject the possibilities afforded by these new technologies.
I mainly pose the question ‘is privacy worth it’ as food for thought. Privacy comes with a cost. It is worth thinking about what compromises you are willing to make, and how far you will go, to protect it.
The importance of privacy
In my view, privacy is vitally important. Everyone has a right not to have almost every aspect of their lives recorded, examined, and then judged or exploited (depending on who is doing the recording). However, maintaining privacy is not easy, and it can never be completely guaranteed in the modern world.
What most of us probably want is the ability to share what we want with our friends and with services that improve our lives, without worrying about this information being shared, dissected and used to profile us.
If more people make efforts to improve their privacy, it will make government agencies’ and advertisers’ jobs more difficult. Perhaps even to the point that it could force a change of approach.
It may take a bit of effort, but it is entirely possible, and not too cumbersome, to take steps that greatly improve your privacy while online. Many experts differ on what is key to protect your online privacy in 2019, so it’s important to remember that nothing is foolproof. However, that is no reason to make things easy for those who would invade aspects of your life that should rightfully be yours and yours alone.
Privacy is a precious but endangered commodity. By implementing at least some of the ideas I have covered in this guide, you not only help to protect your own privacy but also make a valuable contribution to conserving it for everyone. This article was originally posted on ProPrivacy.
Grade school might tell you anything is possible. Complete and total decentralization, however, is not – at least not sustainably.
The premise that total decentralization is impossible hinges on the fact that decentralization “experiments” such as Bitcoin have drifted toward degrees of centralization.
For example, Bitcoin’s Proof-of-Work (PoW) mechanism relies on many different nodes to “mine,” or verify and facilitate, transactions. These miners are rewarded a portion of transaction fees and a shot at winning the Bitcoin block reward, which is currently 12.5 bitcoins, or about $115,000 – a not-insignificant prize dished out about 144 times per day.
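These figures follow directly from Bitcoin’s design parameters, as a quick back-of-the-envelope calculation shows. The implied per-coin price below is simply the article’s two figures divided, and will obviously have changed since this was written:

```python
# Bitcoin targets one block roughly every 10 minutes.
minutes_per_day = 24 * 60
blocks_per_day = minutes_per_day // 10
print(blocks_per_day)  # 144

# At the quoted 12.5 BTC block reward worth about $115,000...
block_reward_btc = 12.5
block_reward_usd = 115_000
implied_btc_price = block_reward_usd / block_reward_btc
print(implied_btc_price)  # 9200.0 dollars per coin

# ...the network mints 1,800 new bitcoins per day.
daily_issuance_btc = blocks_per_day * block_reward_btc
print(daily_issuance_btc)  # 1800.0
```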
This has incentivized the creation of “mining pools”: collections of nodes that work together and divide the prize among their operators. Mining pools have taken what would otherwise be a decentralized utopia and introduced a degree of centralization. The current state of Bitcoin mining is far from a single entity controlling the network with a 51% lion’s share, but it still highlights the downfalls of decentralization.
There are cryptocurrencies that utilize different consensus mechanisms, such as Proof-of-Stake and Delegated-Proof-of-Stake, and they manage to address some of Bitcoin’s drawbacks (such as the high energy cost of mining), but they still leave the decentralization issue unresolved.
The core tenet of decentralization, at least among the common blockchain ethos, is that decentralization insulates the network from tyranny and corruption, a duo that has plagued human governance and economics since the dawn of organized civilization.
The same concepts that power Adam Smith’s “free market” philosophy, which underpins modern economics, encourage decentralization as long as every party is properly incentivized (mining rewards) and has access to somewhat similar resources (electricity, reasonable utility costs, etc.) as the next.
The glory of today’s blockchains isn’t so much derived from the idea of a utopian decentralization, but more so as a rebellion against the status quo of centralized banks dictating the economy, individuals being tethered to the success of their government, and the ability for large tech corporations and banks to shut down financial accounts at will.
There are several degrees of “decentralization”, and even though not “perfect,” even smaller decentralization improvements can be disruptive to the existing “centralized” status quo.
The goal is to figure out which areas will be impacted first; and which are the top use cases given the infrastructure’s properties (security, decentralization, scalability).
For example, think of the idea of a decentralized “Uber,” a popular thought experiment in the blockchain entrepreneurial community. When someone needs a car to pick them up, what would they prefer: a nearly immediate connection to a driver courtesy of an efficient centralized server, or waiting for minutes and potentially hours for the transaction to be verified and processed?
The push towards absolute decentralization is unfeasible, and in reality, the benefits of decentralization vary across use cases. Simply put, we don’t need decentralization for everything; centralization tends to work just fine for most things, and it shouldn’t be viewed as an enemy of what the blockchain movement seeks to accomplish.
Ultimately, blockchain is simply just another technology to be leveraged to solve certain problems.
Once upon a time, centralization was the solution to the disorganized chaos of the unpredictable. Our ancestors who lived in groups tended to live longer and better lives than individuals not in groups.
These groups evolved into tribes that utilized some form of centralization (leadership, hierarchy, enforced order) that protected their members and used resources more efficiently. Eventually, tribes expanded and, however indirectly, ended up becoming countries that function using the same centralized mechanics.
Decentralization uses the evolution of the Internet and technological infrastructure to create systems that don’t require a centralized entity’s direct oversight. However, decentralized entities can only function within the scope of a centralized society.
Today’s projects are able to enjoy both the benefits of existing within the confines of a centralized society and the possibilities enabled by blockchain and the Internet. While full decentralization might be impossible, there’s no reason to fuss. This article was originally posted on Albaron Ventures.
HTTP proxies are an essential component of day-to-day internet use. Load balancers, routers, content accelerators, and content protection systems are all simple examples of web proxies, and they all act as intermediaries that send your HTTP requests where they need to go, anonymize requests, route traffic, speed up the net, and serve many other uses.
When it comes down to it, most web proxies fall into two camps:
Reverse Proxies – A reverse proxy is usually an internal-facing proxy used as a front-end to control and protect access to servers on a private network. A reverse proxy commonly also performs tasks such as load-balancing, authentication, decryption or caching.
Forward Proxies – A forward proxy is an Internet-facing proxy used to retrieve content from a wide range of sources.
Let’s look at these two types in more detail.
What is a Reverse Proxy
You’re probably using a reverse proxy in order to view this content. When you make a request to the server that serves this blog post, it will pass through a load balancer. This load balancer is a type of reverse proxy. Reverse proxies sit in front of one or more servers and distribute requests to them. Common examples would be Nginx’s proxy_pass module, HAProxy, Squid, and AWS’ ELB.
A reverse proxy receives a request from a client on the Internet and retrieves the requested resources from one or more servers that sit behind it. The client needs no knowledge of the servers (often called upstream servers) that serve the original content, and they can be changed as required without any outside knowledge. The reverse proxy handles that information.
As part of this handling of the request the reverse proxy will often provide additional value such as terminating SSL, performing authentication and/or authorization, accelerating (caching or compressing) content or rewriting the request and/or response.
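The core loop of a reverse proxy can be sketched in a few lines. The sketch below is illustrative only, not a production implementation (real deployments use Nginx, HAProxy, and the like), and the upstream addresses are placeholders: it accepts a request, picks an upstream server round-robin, fetches the resource, and relays it back, so the client never learns which upstream served it.

```python
# Minimal reverse-proxy sketch (illustrative; upstream addresses are
# placeholders, and real deployments would use Nginx or HAProxy).
import itertools
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAMS = ["http://127.0.0.1:8081", "http://127.0.0.1:8082"]
_pool = itertools.cycle(UPSTREAMS)

def pick_upstream() -> str:
    # Round-robin load balancing: rotate through the upstream servers.
    return next(_pool)

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # The client only ever talks to us; the upstream choice is hidden.
        upstream = pick_upstream()
        with urlopen(upstream + self.path) as resp:
            body = resp.read()
            status = resp.status
        self.send_response(status)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ReverseProxy).serve_forever()
```

Terminating SSL, caching, and authentication would all slot into `do_GET` before or after the upstream fetch.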
The word “reverse” in the name reverse proxy has no special meaning, it’s just used as the inverse of forward proxy which actually has a meaning as you’ll read shortly.
What is a Forward Proxy
Forward proxies are commonly used to control traffic leaving networks. When you send a request via the proxy it will “forward” your request on to the requested website, hence the name “Forward Proxy”.
A common job of a forward proxy is to control access to the internet by inspecting certain attributes of each request as it passes through. If you’re on a corporate network that prohibits you from going to a social network such as facebook.com, this will often be the job of a forward proxy. The forward proxy is able to inspect the host of the request and, since corporate network traffic is often mandated to flow through the proxy, it can deny any request that uses a prohibited host.
Another common use-case for a forward proxy is to anonymize where the request originally came from.
A forward proxy sits between the user’s requests and the internet. As such, when the forward proxy sends a request to a host, the host computer sees the IP address of the forward proxy, not of the user. This is commonly used to perform IP anonymization and is a major feature of VPNs.
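The access-control side of a forward proxy boils down to a policy check on each outgoing request. A minimal sketch, where the blocked-host set is a made-up example of a corporate policy:

```python
# Forward-proxy policy check: inspect the host of each outgoing request
# and deny prohibited destinations, as a corporate proxy would.
from urllib.parse import urlparse

BLOCKED_HOSTS = {"facebook.com", "www.facebook.com"}  # example policy only

def allow_request(url: str) -> bool:
    # Extract the host from the request URL and check it against policy.
    host = urlparse(url).hostname or ""
    return host not in BLOCKED_HOSTS
```

A real proxy would apply this check before forwarding, returning an HTTP 403 to the client when `allow_request` is false.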
Layer 7 versus layer 3
Most of the time, ‘proxy’ refers to a layer-7 application on the OSI reference model. However, another way of proxying operates at layer 3 and is known as Network Address Translation (NAT). The difference between these two proxy technologies is the layer at which they operate and the procedure for configuring the proxy clients and proxy servers.
Layer-7 proxies are more suitable if you’re inspecting the content of the payload to perform routing or otherwise manipulating the payload.
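The NAT idea above can be sketched as a translation table: a layer-3 device rewrites each (private IP, port) pair to a shared public IP and a fresh port, and remembers the mapping so replies can be translated back. All addresses below are made up for illustration.

```python
# Sketch of layer-3 NAT: map private (IP, port) pairs to ports on a
# shared public IP, and translate replies back. Addresses are made up.
PUBLIC_IP = "203.0.113.5"

class NatTable:
    def __init__(self):
        self._out = {}        # (private_ip, private_port) -> public_port
        self._in = {}         # public_port -> (private_ip, private_port)
        self._next_port = 40000

    def translate_out(self, private_ip: str, private_port: int):
        # Outbound packet: reuse an existing mapping or allocate a port.
        key = (private_ip, private_port)
        if key not in self._out:
            self._out[key] = self._next_port
            self._in[self._next_port] = key
            self._next_port += 1
        return (PUBLIC_IP, self._out[key])

    def translate_in(self, public_port: int):
        # Inbound reply: look up which private host owns this port.
        return self._in.get(public_port)
```

Unlike a layer-7 proxy, nothing here ever parses the HTTP payload; only the IP/port headers are rewritten.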
As Bitcoin and other digital assets continue to grow in adoption and popularity, a common topic for discussion is whether the U.S. government, or any government for that matter, can exert control of its use. There are two core issues that lay the foundation of the Bitcoin regulation debate:
Digital assets pose a macro-economic risk. Bitcoin and other cryptocurrencies can act as surrogates for an international currency, which throws global economics a curveball. For example, countries such as Russia, China, Venezuela, and Iran have all explored using digital currency to circumvent United States sanctions, which puts the US government at risk of losing its global authority.
International politics and economics are a very delicate issue, and often sanctions are used in place of military boots on the ground, arguably making the world a safer place.
The micro risks enabled by cryptocurrency weigh heavily in aggregate. One of the most attractive features of Bitcoin and other digital assets is that one can send anything from a few pennies’ worth to billions of dollars of Bitcoin anywhere in the world at any time for a negligible fee (currently around $0.04 to $0.20, depending on the urgency).
However, in the hands of malicious parties, this could be very dangerous. The illicit activities inherently supported by a global decentralized currency run the gamut: terrorist funding, selling and buying illegal drugs, ordering assassinations, dodging taxes, laundering money, and so on.
Can Bitcoin Even Be Regulated?
Before diving deeper, it’s worth asking whether Bitcoin can be regulated in the first place.
The cryptocurrency was built with the primary purpose of being decentralized and distributed, two very important qualities that could make or break Bitcoin’s regulation.
By being decentralized, Bitcoin doesn’t have a single controlling entity. The control of Bitcoin is shared among several independent entities all over the world, making it nearly impossible for a single entity to wrangle full control over the network and manipulate it as they please.
By being distributed, Bitcoin exists at many different locations at the same time. This makes it very difficult for a single regulatory power to enforce its will across borders. This means that a government or other third party can’t technically raid an office and shut anything down.
That being said, there are several chokepoints that could severely hinder Bitcoin’s adoption and use.
1. Targeting centralized entities: exchanges and wallets
A logical first move is to regulate the fiat onramps (exchanges), which the United States government has finally been getting around to. In cryptocurrency’s nascent years, cryptocurrency exchanges didn’t require much input or approval from regulatory authorities to run. However, the government started stepping in when cryptocurrency started hitting the mainstream.
The SEC, FinCEN (Financial Crimes Enforcement Network), and CFTC have all played a role in pushing Know Your Customer (KYC) protocols and Anti-Money Laundering (AML) policies across all exchanges operating within U.S. borders.
Cryptocurrency exchanges have no option but to adhere to whatever the U.S. government wants. The vast majority of cryptocurrency users rely on an exchange to make use of their cryptocurrency, so they will automatically bend to exchange-imposed regulation.
Regulators might not be able to shut down the underlying technology that powers Bitcoin, but they can completely wreck the user experience for the great majority of cryptocurrency users, which serves as enough of an impediment to diminish the use of cryptocurrency for most.
2. Targeting individual users
The government can also target individual cryptocurrency users. Contrary to popular opinion, Bitcoin (and even some privacy coins) isn’t anonymous. An argument can be made that Bitcoin is even easier to track than fiat because of its public, transparent ledger.
Combined with every cryptocurrency exchange’s willingness to work with U.S. authorities, a federal task force could easily track money sent and received from certain addresses and pinpoint the actual individual behind them. Companies such as Elliptic and Chainalysis have already created solid partnerships with law enforcement in many countries to track down illicit cryptocurrency uses and reveal the identities behind the transactions.
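The reason this tracking works is that the ledger is a public graph of transactions, so funds can be followed from address to address until they hit an identifiable endpoint such as an exchange. A toy illustration (every address and amount below is made up; real chain analysis handles branching, mixing, and far larger graphs):

```python
# Toy illustration of Bitcoin's traceability: the public ledger is a
# graph of payments, so funds can be followed hop by hop.
# All addresses and amounts are invented for this example.
ledger = [
    {"from": "addr_A", "to": "addr_B", "btc": 1.0},
    {"from": "addr_B", "to": "addr_C", "btc": 0.6},
    {"from": "addr_C", "to": "exchange_hot_wallet", "btc": 0.6},
]

def trace(start: str, ledger: list) -> list:
    # Follow outgoing payments from `start` until the trail ends.
    path, current = [start], start
    while True:
        nxt = next((tx["to"] for tx in ledger if tx["from"] == current), None)
        if nxt is None:
            return path
        path.append(nxt)
        current = nxt
```

Once the trail reaches an exchange wallet, KYC records can tie the whole path to a real identity.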
Beyond that, we dive into the dark web and more professional illicit cryptocurrency usage. Although trickier, the government likely has enough cyber firepower to snipe out the majority of cryptocurrency-related cybercrime. In fact, coin mixers (cryptoMixer.io), coin swap services (ShapeShift) and P2P bitcoin transactions (localbitcoins.com) have been investigated for several years now and most of them have had to add KYC and adhere to strict AML laws.
Ultimately, it’s going to take a lot to enforce any sort of significant global regulation on Bitcoin, with the most important missing factor being centralization and consensus of opinion. The majority of the U.S. regulatory alphabet agencies fall into the same camp of “protect the good guys, stop the bad guys,” but there isn’t a single piece of guidance to follow. Currently, cryptocurrencies are regulated in the US by several institutions (CFTC, SEC, IRS), making it difficult to create overarching regulatory guidelines.
In short, yes: Bitcoin can be regulated. In fact, its regulation has already started with the fiat onramps and adherence to strict KYC and AML laws. While Bitcoin ownership is illegal in countries such as Ecuador, Bolivia, Egypt, and Morocco, in the US it would take some bending of the moral fabric of the Constitution for cryptocurrency ownership rights to be infringed.
However, it cannot be shut down. There are still ways to buy, sell, and trade Bitcoin P2P, without a centralized exchange. It would take an enormous effort by any government to completely uproot something as decentralized as Bitcoin, but that future seems more dystopian than tangible. You can read the full article on albaronventures.com
The main benefit of an Industry 4.0 system is the ability to create decentralized decision-making without relying on the traditional, physical approach.
There are a lot of different industries that can benefit from Industry 4.0. However, it has shown the best results for:
virtual and augmented reality
specific types of goods
autonomous or remote-controlled vehicles
In this article, I will discuss some practical steps that will help you implement Industry 4.0.
But let’s move forward one step at a time and see how it all started. Read on!
The first mention of the term Industry 4.0
If you think that the birth of Industry 4.0 has to do with the private sector, you would be dead wrong.
The term was initially introduced by the German government in their attempt to promote digitisation. Their version was “Industrie 4.0” or simply “I4”.
It was used during the Hannover Fair in 2011. Something that initially started as a great idea quickly turned into action: The German government created a working group in 2012 that was meant to create guidelines which would then be implemented at a federal level.
The working group was spearheaded by Siegfried Dais (Robert Bosch GmbH) and Henning Kagermann (German Academy of Science and Engineering). In April 2013, the project was ready.
Given that the concept is relatively new, it is still being modified and improved. There is a lot of discussion as to how to further polish it so it can benefit everybody.
There are also a lot of economic, political and social challenges that need to be addressed which we will discuss in the following chapters, so make sure you stick around!
Industry 4.0’s impact on the market
Not everyone wanted to implement I4.
While the concept works amazingly for bigger systems, it might not be ideal for smaller enterprises. Its effect will also vary from industry to industry.
Nevertheless, Industry 4.0 can bring amazing results to the companies that implement it.
There are also those who are unwilling to make the shift due to preconceptions or conservatism. However, as we go forward, it seems that Industry 4.0 will not only represent a small technical benefit, but it will become a necessary requirement to stay in business.
We recognize 6 dimensions of Industry 4.0:
Strategy and Business Model (Creating the right strategy that will benefit from this concept)
Technology and Systems (Using technology for optimal results)
Governance and Risk Management (Getting the most from the concept while avoiding risks)
People (Educating and leading employees so they can adapt to the new industrial norm)
Operational Excellence (Gaining a competitive edge through technology)
Depending on the author, these dimensions may vary.
Still, they will almost always refer to processes that are otherwise common for traditional businesses.
What are the 3 main end benefits of Industry 4.0?
If we exclude the initial implementation costs, there are very few drawbacks to I4. It is a concept that keeps on giving and the sooner you implement it, the sooner you will see its benefits.
The 9 Pillars of Industry 4.0
Industry 4.0 is a very complex subject.
So, it is not surprising that it’s based on 9 main pillars (you can also call them elements).
Here they are:
1 Big Data and Analytics
Big data is crucial for any corporation. It refers to datasets that have a major impact on how a company forms its strategies and runs its day-to-day operations.
By improving this concept, the company is able to achieve a competitive edge. Like most business concepts, it is based on monitoring and measuring results, and then finding the right solutions. The main focus of analytics lies in dataset analysis.
Based on this analysis, the company can learn more about customer preferences and current market trends. Big data can also be used for risk mitigation. Bosch was able to use big data as a way of digitally transforming their company.
They were able to connect their machinery in order to have better oversight of their manufacturing. With this, not only are they able to monitor the performance of individual machines and systems, but they are also able to discover any issues.
This way, they were able to maximize the output of their machinery, which led to a 10% overall increase in productivity. At the same time, Bosch was able to improve their customer satisfaction rate. For example, in this case study, scientists were able to create a risk map regarding Rift Valley Fever.
2 Autonomous Robots
Robots are not a new concept; they have existed for quite a while. However, we were never able to implement them on a larger scale, and robots of the past weren’t as autonomous as the ones we can buy today. When it comes to industrial use, robots are able to solve certain tasks that are beyond human reach.
Today, it is much easier to implement them as a part of the production process.
But that doesn’t mean that human labour will become extinct, as a worker’s input is still necessary to allow a robot to perform certain tasks. Robots can be utilized in various ways, leading to production, logistics, and distribution improvements.
Whether you like it or not, you need to use robots in order to maintain a competitive edge.
Fetch Robotics, in particular, managed to benefit from this practice. The company relies on their own line of robots, which they named Autonomous Mobile Robots.
They use them for inventory management and in particular, locating and moving things. The best thing yet is the fact that these robots are able to work by themselves, while also learning along the way. Because of them, Fetch Robotics managed to reduce their order cycle time by 50%.
However, this is only the tip of the iceberg, as we don’t yet understand the full potential of autonomous robots. According to this study, there are approximately 2.9 million robots utilized in various industries today.
3 Simulation
Simulation tools are very important for their support role. They can self-configure, enabling effective shop-floor management. Simulations are crucial nowadays, as they allow companies to make predictions about their potential actions.
In other words, by simulating certain operational activities, you can learn where things may go wrong and how to prepare for such occurrences.
Alternatively, you can adjust for them beforehand, thus increasing productivity or reducing costs. According to this study, you can also use simulation tools to increase the productivity of your workforce, as employees quickly see the results of their actions and can change their behavior accordingly.
4 Horizontal and Vertical System Integration
Let’s start by explaining what these two terms mean. Vertical integration describes adaptable systems within a production plant. On the flip side, horizontal integration deals with the integration of partners within the supply chains.
During integration, the network gathers big data, which allows for better performance. When the network has gathered enough data, it uploads it to the cloud, where a framework is created.
By utilizing cloud-based systems, vertical elements can be integrated with each other through the same platform. Integration can be used for almost anything. Not only is it versatile, but it is also very profitable.
5 The Industrial Internet of Things
The Internet of Things is a crucial element of this new industrial revolution. It allows devices to interact with each other, which is crucial for big systems. Each device collects data and sends it to the internet, where it is further processed.
The Industrial Internet of Things is highly dependent on a hierarchy. Lower-tier devices gather data, whereas highly complex devices, such as robots and medical devices, make decisions based on that data. The Industrial IoT is important as it brings organizations more flexibility together with more responsiveness.
Like most other technical aspects of the business, it will bring a competitive advantage to the companies that implement it early on. BJC HealthCare is perhaps one of the best examples of how the Industrial Internet of Things can be implemented to your advantage. As the name implies, this company operates within the health-care industry.
They rely heavily on radio frequency identification for medical supply management. With this technology, they are able to track and identify supplies without physical contact. This proved to be especially important for resupplying and tracking item expiration dates.
Previously, this job was done by people, which took forever, and the chance of human error was high. The company has managed to save a lot of money by carrying a 23% smaller inventory. According to several studies, the IoT is predicted to generate $15 trillion of global GDP by 2030.
6 The Cloud
The cloud is a crucial element that ties the various pillars together. It allows data to be transferred and stored, as well as put to further use. Most of a company’s resources will be stored there in virtual form. The cloud is based on three separate service models: SaaS (software as a service), IaaS (infrastructure as a service), and PaaS (platform as a service).
Nowadays, clouds are so common that you don’t have to be a big corporation to rely on them; even individual users can benefit from Google Cloud, for example. Cloud storage is also very eco-friendly. Among other things, clouds are very popular in the automotive industry, and one of the best examples of this is how Volkswagen uses them.
There are lots of ways a company can benefit from this technology, such as smart home connectivity, better maintenance service, regular updates, personal assistants, and so on. Such systems and technology will be increasingly important in the future as car makers do their best to create autonomous vehicles.
Large amounts of data will have to be stored and transferred, which is why clouds are such a good solution to this problem. They can reduce energy consumption by 70%, making a company not only considerate towards the environment but also more financially sound.
7 Additive Manufacturing
Customization has become more and more important as the years go by. Companies that successfully meet their clients’ needs usually have a better position in the market. This is also how we came to additive manufacturing.
Most people don’t even know what additive manufacturing refers to, because there is a much more common term for it: 3D printing. Besides allowing you to create customized items, 3D printing also helps you avoid mass production, which can otherwise lead to a big inventory and unsellable products.
On top of it all, due to its nature, 3D printing helps you use less material, which is both economically and environmentally sound. Additive manufacturing can take your whole organization to the next level. Fast Radius is one of the best examples of that; it is regarded as one of the top 9 smartest factories in the world.
The company has facilities all over the world, which allows them to create customized products and quickly deliver them to almost any spot on the globe. But there is much more to this organization than simple additive manufacturing; they’ve taken the process and made it their own.
Fast Radius has developed its own platform that works well for its particular business. It collects data from every part design that is stored and manufactured in the Fast Radius virtual warehouse. One study found that the component’s use saves 63% of relevant energy and carbon dioxide emissions over the course of the product’s lifetime.
8 Augmented Reality
Augmented reality is a relatively new concept, but it has already become an integral part of Industry 4.0. With it, you can create an artificial sensation and put a person into a different place.
It is a completely different type of interaction compared to anything we’ve known so far. AR creates a link between a human and a machine, putting the user in a different reality (hence the name). While it doesn’t impact business processes as much as some other pillars, it has still become a part of them.
AR allows designers to experience a product’s design before completing the project. General Electric has always been at the forefront of economic and industrial innovation, and they have proven this by involving themselves with augmented reality.
They ran a pilot project during which the productivity of workers using smart wearables increased by up to 11%, compared to the previous period.
Ultimately, this approach could offer tremendous potential to minimize errors, cut down on costs, and improve product quality. Volkswagen has used it successfully to compare calculated and actual crash test imagery.
9 Cybersecurity
Most business is shifting online. Whether we are talking about purchasing, services, payments, or storing data, both companies and customers are highly dependent on the internet. This is why cybersecurity has become an integral part of I4.
In the last few years, the focus of cybersecurity has been not only to help users and companies but also to prevent cyber-crime and terrorism. Besides helping you protect data, these systems are also able to notice harmful patterns and prevent crime and terrorism before they happen.
As such, cybersecurity ensures everything stays in place as it should. In fact, it is so important that, according to this study, 66% of respondents believe that data breaches or cybersecurity exploits will seriously diminish their organization’s shareholder value.
Like every industrial revolution, there are lots of things that need to be addressed.
Due to the sheer size and global nature of I4, there are many more challenges than in any previous industrial revolution.
At this point, some of these challenges may seem insurmountable. Let’s see what’s ahead of Industry 4.0.
High implementation costs
The cost of implementing these systems will be too high for certain companies. Besides the initial costs, you also have to consider maintenance costs over time.
Adapting the model
As previously mentioned, I4 is especially useful for bigger corporations, but it provides different results for smaller organizations. It may prove difficult to create a model that will work for all types of industry.
While the costs may be really high, that doesn’t mean that I4 would lead to increased productivity in every situation. This is one of the reasons why it might not be economically feasible.
Lack of regulation
Industry 4.0 is meant to have a global character. This would make it really hard to create a framework that can be applied to all countries and systems. To make things even worse, many countries haven’t even started creating the framework which would promote I4 development.
Security issues
The same way security is an issue for the private sector, it can be an issue for the public sector as well. Given that I4 will be regulated by governments, and that there is a high chance the public sector will also start implementing the system, it will pose the same security threats to governments as it does to private enterprises.
Other legal issues
Due to the fact that this whole area lacks regulation, it is easy to see that legal issues are not only a possibility but are to be expected. This would lead to various losses on different sides.
Impact on privacy
Like with everything else digital, you have to ask yourself how much will I4 affect our privacy. The more money is invested in concepts such as this, the more people will be exposed to its negative sides.
IT people losing jobs
The whole point of Industry 4.0 is to make things easier and more productive. This usually means more automation, which in turn will lead to job losses. If the whole system is expanded quickly, this will not just send shockwaves throughout the IT industry, but will affect the wider social fabric.
Disconnect with customers
Due to all of these previously mentioned factors, it is easy to see how I4 might actually have a negative impact on consumers. Ethical business models are crucial for modern companies and those who go against these policies will lose their consumers.
Trouble accepting the concept
Although I4 should represent progress compared to the system we currently have, not everyone looks at it that way. There are a lot of stakeholders who will question both the economic and the social side of it and this resistance can lead to further issues during implementation.
Challenges at the company level
Changes to the production process
Having all these systems and improved production sounds great, but it also means that current processes and machines might become obsolete. In other words, this might make certain systems, software, and machines unusable before you have maximized their potential, which in turn will lead to business losses.
Potential IT issues
Given that everything will be interconnected and managed through one system, this means that even small issues can lead to a complete breakdown. Minor IT issues will not only affect one department but the company as a whole.
Lack of qualifications
The transition period is always rocky, no matter what you are doing. In this particular case, it will be reflected through a lack of qualified employees. This would lead to further losses and a halt in production.
Protecting industrial secrets
Due to the fact that all the data will be online, companies will become more susceptible to corporate espionage and loss of business secrets and patents.
There are many challenges ahead of us, but the benefits are most definitely worth it!
How to prepare your company for Industry 4.0?
As we promised, we will now go through the preparation process and what you need to do in order to introduce I4 to your organization. Let’s dive right in!
1 What are your actual needs?
As already mentioned, not every company will profit from Industry 4.0 in the same way. While certain businesses will see a significant boost in production and improvement of other industrial processes, others will take a step back. In fact, implementing I4 principles may even become a financial burden. So, make sure that you really need this and that you can maintain it financially.
2 How would it work in your particular case?
Let’s say additive manufacturing works well within your industry and your competition has benefited greatly from it. How would it work for you? In most cases, it is not only a question of which processes to implement, but also when and where you implement these principles.
3 Can you create a solid ‘Industry 4.0’ business strategy?
Whether you’re a big corporation or a smaller company, it is very important to create a proper strategy before implementing I4. There are lots of different risks involved in the whole process and the best you can do is to at least mitigate some of them. While you can’t do much about the legal or political aspects, you can definitely influence the financial and human factors. Each pillar needs to be addressed separately and your decisions regarding suppliers and service providers will go a long way in determining the overall success of the whole project. You should especially put emphasis on modular technological solutions.
4 How quickly and efficiently can you educate your staff?
Although Industry 4.0 sounds like a self-sustaining system, it is anything but. Human labour is very important for maintaining some of the systems and working within these new rules. Employees will play a big role in how well you’re able to adapt to changes and how quickly you can transition to this new model. In fact, you might even consider starting the training now, while you’re preparing your general I4 strategy. This will save you a lot of time!
5 How quickly can you adapt and change?
You should never take Industry 4.0 lightly. Something that is based on the 9 different pillars will always pose certain challenges. Small issues with certain aspects of I4 can bring the whole system down. This is why it is very important for your company to be adaptable. You need to be able to change processes on the fly and improve procedures. If you think that your job is done once you implement I4, you’re sorely mistaken.
6 Can you take a financial hit?
Lastly, you will also have to consider your exit strategy. No matter how much we may praise Industry 4.0, the sad truth is that some companies will not benefit from it. In fact, the whole process may become a big loss. If this is the case, you have to ask yourself in advance: “Am I prepared for this and can I take the financial hit that goes with it?”
How to prepare technical communications for I4?
Industry 4.0 will also affect the way we write and deliver information.
When implementing Industry 4.0, there are lots of things a company needs to do on the back end. Not only do you have to address technical aspects, introduce new software and systems, and reconsider compliance processes, but you also need proper documentation, manuals and other content to back up the whole process.
There are a few things you can do to adapt your technical writing to Industry 4.0.
First of all, as I4 products are often connected to the internet, a paper user manual is not the best way to provide the user with information. I4 enables you to create and deliver more intelligent information.
So what is Intelligent Information?
Intelligent information is, amongst others:
efficient use of content processes, people, and technology.
content that is scalable and can be reused.
content that’s personalized, so it delivers most value for the consumer of the content: the right content is delivered to the right person at the right time, regardless of the device or channel that the consumer uses
Software companies that build technical authoring tools, such as MadCap, Adobe, Author-it, Schema, Fischer and Oxygen, enable you to publish the same content to several output formats, such as PDF and online.
This is called single sourcing: the same content is reused for several output formats.
When you create single source content, you can not only use the same content for several mediums, but you can also reuse the content for several related products.
The principles that these software companies build on, and that a professional technical writer should embrace to create ‘smart’ content that can be reused, are topic-based (structured) writing and DITA.
Also, the newly published 82079 standard for information for use integrates many of these principles. The standard:
gives requirements for the information management process;
describes how you can use people efficiently by describing professional competencies;
describes how you can use technology (media and format) in order to digitise your information for use;
gives requirements on structuring information. This enables you to scale, automate and reuse content;
states that information should be delivered individualised, if possible.
Although there are many unknown variables and still quite a lot of challenges to overcome, which makes it really hard to assess whether or not your company can pull it off, investigating the possibilities of Industry 4.0 is an absolute must.
To make sound decisions, you should first of all determine which pillar(s) might be of most importance for your industry or just your company.
Also, identify the risks for your company and how to mitigate them.
In the end, you know your business best: what direction you want to go in and which Industry 4.0-related business decisions serve that best.
The internet has become a staple of modern life. We use it to shop for what we need and want, talk to friends and family, run businesses, meet new people, watch movies and TV, and pretty much everything else you can think of. In short, it has given birth to a new age in human history. The last example of this type of widespread change was the industrial revolution. But unlike the digital revolution, which took place over less than half a century, the transition to industrialized societies took hundreds of years. This rapid change is just further proof of how much the internet is reshaping the way we live. The internet started in the 1950s as a small, government-funded project. But have you ever wondered how these humble beginnings led to worldwide connectivity? If you have, read on for a detailed summary of the history of the internet.
Timeline of the Internet
The invention of the internet took nearly 50 years and the hard work of countless individuals. Here’s a snapshot of how we got to where we are today:
Part 1: The Early Years of the Internet
When most of us think of the early years of the internet, we tend to think of the 1990s. But this period was when the internet went mainstream, not when it was invented. In reality, the internet had been in development since the 1950s, although its early form was a mere shell of what it would eventually become.
Wide Area Networking and ARPA (1950s and 1960s)
For the internet to become popular, we first needed computers. While mechanical calculating machines date back centuries, the first digital, programmable computers broke onto the scene in the 1940s. Throughout the 1950s, computer scientists began connecting computers in the same building, giving birth to Local Area Networks (LANs) and planting the idea that would later morph into the internet.
In 1958, the United States Department of Defense Secretary Neil McElroy signed Department of Defense Directive 5105.15 to create the Advanced Research Projects Agency (ARPA), which, due to the tensions produced during the Cold War, was tasked with creating a system of long-distance communications that did not rely on telephone lines and wires, which were susceptible to attack.
However, it wasn’t until 1962 that J.C.R. Licklider, an MIT scientist and ARPA employee, and Welden Clark published their paper “On-line man-computer communication.” This paper, which was really a series of memos, introduced the “Galactic Network” concept: the idea that there could be a network of connected computers that would allow people to access information from anywhere at any time. Eventually, the idea of a “galactic network” became known as a Wide Area Network, and the race to create this network became the race to create the internet.
Because of how closely this idea resembles the internet today, some have chosen to name Licklider as the “father of the internet,” although the actual creation and implementation of this network resulted from the hard work of many hundreds if not thousands of people.
The First Networks and Packet Switching (1960s)
To build the internet, researchers were working on ways to connect computers and also make them communicate with one another, and in 1965, MIT researcher Lawrence Roberts and Thomas Merrill connected a computer in Massachusetts to one in California using a low-speed dial-up telephone line. This connection is credited as being the first-ever Wide Area Network (WAN). However, while the two men were able to make the computers talk to one another, it was immediately obvious that the telephone system used at the time was not capable of reliably handling communications between two computers, confirming the need to develop a technology known as packet switching to facilitate a faster and more reliable transmission of data.
In 1966, Roberts was hired by Robert Taylor, the new head of ARPA (later renamed DARPA), to realize Licklider’s vision of creating a “galactic network.” By 1969, the early framework of the network, named ARPAnet, had been built, and researchers were able to link a computer at Stanford to one at UCLA and communicate using packet switching, although messaging was primitive. Shortly thereafter, also in 1969, computers at the University of Utah and the University of California, Santa Barbara were added to the network. Over time, the ARPAnet would grow, and it served as the foundation for the internet we have today.
However, there were other versions, such as the Merit Network from the University of Michigan and Louis Pouzin’s CYCLADES network, which was developed in France. Also, Donald Davies and Roger Scantlebury of the National Physical Laboratory (NPL) in the United Kingdom were developing a similar network based on packet switching, and there were countless other versions of the internet in development in various research labs around the world. In the end, the combined work of these researchers helped produce the first versions of the internet.
Internet Protocol Suite (1970s)
Throughout the rest of the 1960s and into the early 1970s, different academic communities and research disciplines, desiring to have better communication amongst their members, developed their own computer networks. This meant the internet was not only growing, but that there were also countless versions of the internet that existed independently of one another.
Seeing the potential of having so many different computers connected over one network, researchers, specifically Robert Kahn from DARPA and Vinton Cerf from Stanford University, began to look for a way to connect the various networks. What they came up with is the Internet Protocol Suite, made up of the Transmission Control Protocol and the Internet Protocol, also known as TCP/IP. The introduction of this concept was the first time the word “internet” was used: it was shorthand for “internetworking,” which reflects the internet’s initial purpose of connecting multiple computer networks.
The main function of TCP/IP was to shift the responsibility for reliability away from the network and towards the host by using a common protocol. This meant that any machine could communicate with any other machine, regardless of which network it belonged to. This made it possible for many more machines to connect with one another, allowing for the growth of networks that much more closely resembled the internet we have today. By 1983, TCP/IP became the standard protocol for the ARPAnet, entrenching this technology in the way the internet works. From that point on, however, the ARPAnet became less and less significant until it was officially decommissioned in 1990.
Part 2: The Internet Goes Mainstream
By the middle of the 1980s, the growth of the internet combined with the introduction of TCP/IP meant the technology was on the brink of going mainstream. However, for this to happen, massive coordination was needed to ensure the many different parties working to develop the internet were on the same page and working towards the same goal.
The first step in this process was to turn the responsibility of managing the development of the internet over to a different government agency. In the U.S., NASA, the National Science Foundation (NSF), and the Department of Energy (DOE) all took on important roles in the development of the internet. By 1986, the NSF created NSFNET, which served as the backbone for a TCP/IP based computer network.
This backbone was designed to connect the various supercomputers across the United States and to support the internet needs of the higher education community. Furthermore, the internet was spreading around the world, with networks using TCP/IP across Europe, Australia, and Asia. However, at this point, the internet was only available to a small community of users, mainly those in the government and academic research community. But the value of the internet was too great, and this exclusivity was set to change.
Internet Service Providers – ISPs (Late 1980s)
By the late 1980s, several private computer networks had emerged for commercial purposes that mainly provided electronic mail services, which, at the time, were the primary appeal of the internet. The first commercial ISP in the United States was The World, which launched in 1989.
Then, in 1992, the U.S. Congress passed legislation expanding access to the NSFNET, making it significantly easier for commercial networks to connect with those already in use by the government and academic community. This caused the NSFNET to be replaced as the primary backbone of the internet. Instead, commercial access points and exchanges became the key components of the now near-global internet infrastructure.
The World Wide Web and Browsers (Late 1980s-early 1990s)
The internet took a big step towards mainstream adoption in 1989 when Tim Berners-Lee from the European Organization for Nuclear Research (CERN) invented the World Wide Web, also known as “www,” or, “the web.” In the World Wide Web, documents are stored on web servers and identified by URLs, which are connected by hypertext links, and accessed via a web browser. Berners-Lee also invented the first Web Browser, called WorldWideWeb, and many others emerged shortly thereafter, the most famous being Mosaic, which launched in 1993 and later became Netscape.
The release of the Mosaic browser in 1993 caused a major spike in the number of internet users, largely because it allowed people to access the internet from their normal home or office computers, which were also becoming mainstream around this time. In 1994, the founder of Mosaic launched Netscape Navigator, which, along with Microsoft Internet Explorer, was the first truly mainstream web browser.
The subsequent Browser Wars, which resulted in the failure of Netscape and the triumph of Microsoft, made Netscape one of the many early internet players to rise quickly and fall just as fast. Many use this story to demonstrate the ruthlessness of Bill Gates’ business practices, but no matter what you think of the guy, this “war” between Netscape and Microsoft helped shape the early days of the internet.
Apart from making it easier for anyone to access the internet from any machine, another reason browsers and the World Wide Web were so important to the growth of the internet was that they allowed for the transfer of not only text but also images. This increased the appeal of the internet to the average person, leading to its rapid growth.
Part 3: The Internet Takes Over
By the middle of the 1990s, the Internet Age had officially begun. Since then, the internet has grown both in the number of users and in the way it affects society. However, the internet as we know it today is still radically different from the internet that first went mainstream in the years leading up to the turn of the millennium.
Growth of the Internet and the Digital Divide
All restrictions to commercial use of the internet were lifted in 1995, and this led to a rapid growth in the number of users worldwide. More specifically, in 1995, there were some 16 million people connected to the internet. By 2000, there were around 300 million, and by 2005, there were more than a billion. Today, there are some 3.4 billion users across the world.
However, most of this growth has taken place in North America, Europe, and East Asia. The internet has yet to reach large portions of Latin America and the Caribbean, the Middle East and North Africa, as well as Sub-Saharan Africa, largely due to economic and infrastructure challenges. This has left many with the fear that the internet will exacerbate inequalities around the world as opportunities provided to some are denied to others based on access to the web.
But the other side of the coin is that these regions are poised to experience rapid growth. East Asia had relatively few internet users in 2000, but that region now represents the majority of internet users in the world, although much of this is due to the rapid industrialization of China and the growth of its middle class.
The Internet Gets Faster
In its early years, a computer required a connection to a phone line to access the internet. This connection type was slow, and it also created problems, the most famous being that it limited the number of people who could access the internet from a particular connection. (Who doesn’t remember getting kicked off the internet when their mom or dad signed on or picked up the phone?)
As a result, shortly after the internet went mainstream, the public began demanding faster internet connections capable of transmitting more data. The response was broadband internet, which made use of cable and Digital Subscriber Line (DSL) connections, and it rapidly became the norm. By 2004, half the world’s internet users had access to a high-speed connection. Today, the vast majority of internet users have a broadband internet connection, although some 3 percent of Americans still use a dial-up internet connection.
Another big driver of the growth of the web was the introduction of the concept known as “Web 2.0.” This describes a version of the web in which individuals play a more active role in the creation and distribution of web content, something we now refer to as social media.
However, there is some debate as to whether or not Web 2.0 is truly different from the original concept of the web. After all, social media grew up alongside the internet – the first social media site, Six Degrees, was launched in 1997. But no matter which side of the debate you fall on, there’s no doubt that the rise of social media sites such as MySpace and Facebook helped turn the internet into the cultural pillar that it has become.
The Mobile Internet
Perhaps the biggest reason the internet has become what it is today is the growth of mobile technology. Early cell phones allowed people to access the internet, but the experience was slow and stripped down. The Apple iPhone, released in 2007, gave people the first mobile browsing experience that resembled the one they got on a computer, and 3G wireless networks were fast enough to allow for email and web browsing.
Furthermore, WiFi technology, which was invented in 1997, steadily improved throughout the 2000s, making it easier for more and more devices to connect to the internet without needing to plug in a cable, helping make the internet even more mainstream.
WiFi can now be found almost anywhere, and 4G wireless networks connect people to the mobile internet with speeds that rival those of traditional internet connections, making it possible for people to access the internet whenever and wherever they want. Soon, we will be using 5G networks, which allow for even faster speeds and lower latency. But perhaps more importantly, 5G will make it possible for more devices to connect to the network, meaning more smart devices and a much broader understanding of the internet.
Part 4: The Future of the Internet
While the concept of the internet dates back to the 1950s, it didn’t become mainstream until the 1990s. But since then, it has become an integral part of our lives and has rewritten the course of human history. So, after all this rapid growth, what’s next?
For many, the next chapter of the history of the internet will be defined by global growth. As economies around the world continue to expand, it’s expected that internet use will as well. This should cause the total number of internet users around the world to continue to grow, limited only by the development of infrastructure, as well as government policy.
One such government policy that could dramatically impact the role of the internet in our lives is net neutrality. Designed to keep the internet a fair place where information is freely exchanged, net neutrality prohibits ISPs from offering preferred access to sites that choose to pay for it. The argument against net neutrality is that some sites, such as YouTube and Netflix, use considerably more bandwidth than others, and ISPs believe they should have the right to charge for this increased use.
However, proponents of net neutrality argue this type of structure would allow large companies and organizations to pay their way to the top, reducing the equality of the internet. In the United States, net neutrality was established by the FCC in 2015, under the Obama administration, but in 2018, this policy was repealed. At the moment, nothing significant has changed, but only time will tell how this shift in policy will affect the internet.
Another issue that could affect the internet moving forward is censorship. Internet use around the world is often restricted, most famously in China, as a means of limiting the information available to people. In other parts of the world, specifically the U.S. and Europe, such policies have not been enacted. However, in the era of fake news and social media, some companies, most notably Facebook, are taking action to limit what people can say on the internet. In general, this is an attempt to curb the spread of hate speech and other harmful communications, but this is a gray area that has defined free speech debates for most of history and will continue to be at the center of debates about the internet for years to come.
The internet has helped usher in a new age in human history, and we are just now beginning to understand how it will impact the way we live our lives. The fact that this tremendous cultural revolution has taken place in less than half a century speaks to the rapid nature of change in our modern world, and it serves as a reminder that change will continue to accelerate as we move into the future. You can read the full article on broadbandsearch.net.
SALT lending is a platform that provides Blockchain-Backed Loans. SALT (Secured Automated Lending Platform) enables you to put up your crypto as collateral in exchange for a cash loan. This strategy is ideal if you need to pay off an unexpected expense or want to make a big purchase without having to sell off your blockchain assets.
In this SALT lending guide, we’re going to outline how the platform works, the team behind it, and the SALT token itself.
SALT revolves around the company’s trademarked Blockchain-Backed Loans. Blockchain-Backed Loans are simply loans in which you hand over a blockchain asset, like Bitcoin, as collateral in exchange for traditional currencies. Unlike traditional auto or home loans, you can use these loans for any personal or business expense.
Originally, to use the SALT lending platform, you first needed to pay to become a member. There were three tiers of membership:
Base (1 SALT/year)
Premier (10 SALT/year)
Enterprise (100 SALT/year)
Higher membership tiers enabled you to borrow more money across additional currencies and gave you more flexible loan terms. Higher tiers also enjoyed a range of other perks such as early access to new products, portfolio management, and credit/debit cards.
Original SALT Lending Tier System
UPDATE: As of this writing, the company is reorganizing its memberships, so you’re unable to sign up for one. However, you’re still able to take out a loan, and you no longer need SALT to do so. If you stake SALT for your loan, though, you’ll receive a better rate and/or terms. You need to provide several pieces of personal information to create an account and become a member: your first name, last name, a valid email address, and your country. You’re also subject to Know Your Customer (KYC) and Anti-Money Laundering (AML) checks, so be prepared to upload an ID.
On the other side of SALT are the lenders. Lenders have previously avoided dealing with cryptocurrencies because of the oftentimes complicated nature of the assets. SALT provides lenders with the infrastructure, compliance, and security they need to accept crypto collateral without adding additional costs to their current processes. In exchange for these services, lenders must also pay for a SALT membership.
Once again differing from traditional finance, SALT never checks your credit score. Instead, the platform uses only the value of your crypto collateral to determine the terms of your loan. Lenders kick off the loan process by posting the terms on which they’re willing to lend. As a borrower, you can look through the various terms and choose the ones best suited for you.
SALT Lending Process
Once you pick a loan, the lenders commit the cash funds while you provide collateral to a smart contract. The cash funds are sent directly to your bank account. You then pay monthly installments based on the loan terms, and when your loan is paid off, SALT releases your collateral from the smart contract and returns it to you.
The SALT Oracle creates the smart contracts for each loan and triggers the events of the loan. To lower the risk of default, the Oracle also records loan payments and monitors the changing value of the crypto collateral. Every loan starts with a loan-to-value ratio that’s calculated from the terms of the loan. This ratio is effectively the amount of the loan divided by the amount of collateral. For example, a $100,000 loan secured by $125,000 worth of Ethereum would have an original loan-to-value ratio of $100,000 / $125,000 = 80.0%. As you pay off the loan, this ratio decreases because the amount of the outstanding loan decreases. However, if the value of your collateral decreases due to a decline in the market price, this ratio will increase. If the ratio ever increases beyond the initial loan-to-value ratio, you’ll be required to either:
provide more collateral, or
pay-off an additional amount of the loan
until the ratio returns to the original level. The Oracle autonomously tracks the loan-to-value ratios and notifies borrowers when a ratio becomes too high. The amount of time a borrower has to correct the ratio differs based on the velocity of the price decline.
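Putting numbers to the mechanism above, here is a minimal sketch of loan-to-value monitoring in Python. The function name and the margin-call check are illustrative assumptions, not SALT's actual code.

```python
def ltv(outstanding_loan: float, collateral_value: float) -> float:
    """Loan-to-value ratio as a fraction (0.80 == 80%)."""
    return outstanding_loan / collateral_value

# The example from the text: a $100,000 loan against $125,000 of ETH.
initial_ltv = ltv(100_000, 125_000)               # 0.80

# Paying the loan down lowers the ratio...
after_payment = ltv(80_000, 125_000)              # 0.64

# ...while a 20% drop in the collateral's market price raises it.
after_price_drop = ltv(100_000, 125_000 * 0.80)   # 1.00

# Illustrative margin-call rule: act once the ratio exceeds its initial level.
margin_call = after_price_drop > initial_ltv      # True
```

The key asymmetry to notice is that repayments only ever lower the ratio, while collateral price swings can push it in either direction, which is why the Oracle has to watch the market continuously.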
SALT Lending Team & Progress
The SALT team is over 15 members strong and was led by Shawn Owen as CEO. Owen is a serial entrepreneur with years of experience in hospitality operations. In July 2018, Owen left the company, leaving CTO Bill Sinclair to take his place. The most notable member of the SALT team is one of their advisors, Erik Voorhees. Voorhees is the founder and CEO of ShapeShift – one of the most popular crypto-to-crypto exchanges. SALT reached a big milestone in January 2018 by officially beginning to provide loans for top-tier members. The platform already has over 70,000 loans and has funded over $50,000,000 in those loans. Plans for 2018 included launching credit cards, creating loan funds, and expanding collateralization to other alternative coins as well. The team only hit some of those milestones. The company expanded support, adding Litecoin and Dogecoin loans. But, it looks as if credit cards and developer tools are still some time out.
SALT is the current leader in blockchain-based loans; however, there are a few other competitors popping up in the space. ETHLend and Elix are two younger competitors that provide decentralized lending on the Ethereum blockchain. SALT differentiates itself by focusing on institutional cash loans that are backed by cryptocurrency while the other two projects appear to have taken a peer-to-peer approach. Both use-cases should have a solid place in the market. Additionally, SALT is competing with more traditional platforms that provide crypto-backed loans but aren’t using a specific token.
SALT tokens, also known as membership tokens, are ERC20 tokens that you spend to become a member of the SALT lending platform. Furthermore, you can redeem these tokens to pay down loan interest, receive better rates on loans, and purchase items from SALT’s online store. At one point, these tokens held a different value on the lending platform than what they were trading for in the market. They used to be worth exactly $27.50 on the lending platform while trading at a value below that price. You could also previously pay-down the capital of your loans with SALT tokens. So, this created an interesting arbitrage opportunity. If you had the bankroll, you could technically get an Enterprise membership for $1200 and take out a $1M loan backed by $1.25M of Bitcoin. You could then turn around and buy $1M worth of SALT tokens from the market (~83,333 SALT). Because the SALT tokens were worth $27.50 on the platform you would only need to spend ~45,455 SALT tokens to pay back your loan. This would leave you with a little under 40,000 SALT tokens plus the original Bitcoin you put up as collateral – about a 40% return. The SALT team must have caught on to this scheme because they’ve since removed the opportunity.
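The arbitrage arithmetic above can be checked directly. This sketch uses the article's own figures (the fixed $27.50 platform value, ~83,333 SALT per $1M at the market price of the time, and its ~45,455-token repayment figure); the exact return would depend on fees and price movement, so treat the numbers as illustrative.

```python
membership_cost = 1_200            # Enterprise membership, per the article
loan = 1_000_000                   # cash loan, backed by $1.25M of Bitcoin
collateral = 1_250_000

tokens_bought = 83_333             # ~$1M of SALT at the market price of the time
market_price = loan / tokens_bought          # ~$12.00 per token
tokens_to_repay = 45_455           # the article's figure, at the $27.50 platform value

tokens_left = tokens_bought - tokens_to_repay        # 37,878 ("a little under 40,000")
profit = tokens_left * market_price - membership_cost
roi_on_collateral = profit / collateral              # ~36%, roughly the article's "about 40%"
```

The gap the trade exploited was simply the spread between the token's fixed platform value and its lower market price; once SALT removed the fixed repayment value, the spread (and the trade) disappeared.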
SALT held their ICO in Q3 2017 in which you could purchase a membership token for $3.00 – $7.00 depending on the time that you bought it. There are a total of 120M SALT tokens, and just over 80M are currently circulating in the market. The SALT price briefly experienced the common “post-ICO” dip before spiking back up to a little over $7 in October 2017. Shortly after, the price fell to the $2-$4 (0.0003-0.0005 BTC) range and stayed there until December 2017.
Starting in December 2017, the price steadily rose and jumped to an all-time high of over $17 with the announcement that lending on the platform had finally begun. Since that high at the very end of 2017, the price has fallen drastically. Throughout 2018, the coin has lost over 98 percent of its value. It’s currently worth about $0.25. Because SALT tokens aren’t required to use the lending platform, there’s not much that will cause the price to rise again. A healthy cryptocurrency market should help. And juicy enough membership incentives may also provide some positive demand-side pressure. Other than that, it’s tough to see this coin rising from the dead.
Where to Buy SALT
The most popular exchanges to purchase SALT are Binance and Bittrex. To trade for SALT on one of these exchanges you need to first have Bitcoin or Ethereum. If you don’t have either, you can purchase them with traditional currency on an exchange like Gemini and then transfer them over. For a full list of exchanges where you can buy SALT, check out CoinMarketCap.
Where to Store SALT
Because SALT is an ERC20 token, you have a few different options on where to store it. A popular online option is MyEtherWallet. The SALT website recommends that you use the Jaxx wallet and even provides instructions here. Jaxx is available on Android, iOS, Mac, Windows, Linux, and as a Chrome extension. The most secure way to store your tokens is by using a hardware wallet like Trezor or the Ledger Nano S. Using hardware wallets keeps your funds offline and out of the reach of hackers and ill-intended software.
The SALT lending platform is a great option if you want/need to make some real-world expenses and don’t want to lose the potential gains from your crypto holdings. Beyond that, the project works to solve a major problem of blockchain assets – illiquidity. By opening up an entirely new form of loans, the project brings more liquidity to the cryptocurrency market. The team has a solid foundation of blockchain experience and is advised by a leader in the industry. With a working platform in the market already, SALT is ahead of many other blockchain projects. That being said, there’s no requirement to use SALT tokens on the platform. So, it should make you wonder why the company has a specific token in the first place. Hopefully, the new membership tiers will make this more obvious. Editor’s Note: This article was updated by Steven Buchko on 12.04.2018 to reflect the recent changes of the project. This article was originally published at CoinCentral.com.
TRON CRYPTOCURRENCY is a digital currency created by the controversial figure Justin Sun that aims to expand the space of decentralized applications (DApps) by making the tools for their creation and management more accessible to users.
Despite the fact that over 1000 DApps currently exist (mostly on the Ethereum blockchain), creating them is still a huge challenge for developers. This in turn limits blockchain adoption since the technology is best used in applications that normal people interact with.
One main challenge faced by developers is the complex protocols associated with blockchain technology. Since the technology is fairly new, developers have a problem figuring out how to fit their applications into the system.
This is where TRON comes in. It eases the transition, allowing developers to turn otherwise centralized applications into decentralized ones and create new DApps from scratch. TRON does this by simplifying the distribution of data without restrictions. Users can easily upload and store data in various content formats, including videos and audio files through content-enabled channels.
The TRON platform also rewards users with its asset known as TRX for uploading digital content. Usually, the user receives rewards proportional to the number of uploads they’ve done on the network. This sustains the platform by incentivizing users to upload even more files in the DApp creation process.
The most important value that TRON presents is its unflinching focus on people rather than enterprise. Using TRON cryptocurrency, users now have a cheap, secure and efficient way to keep their digital footprints intact. This way, it will be more difficult for them to fall prey to phishing ads on the internet.
The TRON transaction system also offers a huge advantage for users. All transactions are carried out on a public ledger, allowing anyone to trace them without giving up user data. This system, known as the TRONix transaction system, uses a UTXO (unspent transaction output) model, which unlocks preset amounts of TRON that users send according to a defined set of rules.
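To make the UTXO idea concrete, here is a minimal Python sketch of how a transfer consumes a sender's unspent outputs and creates new ones (a payment plus change). All names and amounts are illustrative assumptions; this is not TRON's actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UTXO:
    owner: str   # address allowed to spend this output
    amount: int  # value locked in this output

def spend(utxos, sender, recipient, amount):
    """Consume the sender's unspent outputs and create new ones."""
    available = [u for u in utxos if u.owner == sender]
    total = sum(u.amount for u in available)
    if total < amount:
        raise ValueError("insufficient funds")
    remaining = [u for u in utxos if u.owner != sender]
    # New outputs: the payment to the recipient, plus change back to the sender.
    outputs = [UTXO(recipient, amount)]
    if total > amount:
        outputs.append(UTXO(sender, total - amount))
    return remaining + outputs

# Hypothetical ledger state: alice holds two unspent outputs.
ledger = [UTXO("alice", 50), UTXO("alice", 30)]
ledger = spend(ledger, "alice", "bob", 60)
```

After the transfer, the ledger holds a 60-unit output owned by bob and a 20-unit change output owned by alice; the original outputs no longer exist, which is exactly what makes UTXO transfers publicly traceable without exposing account internals.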
HOW IS TRON DIFFERENT FROM OTHER CRYPTOCURRENCIES?
Apart from the fact that it cannot be mined, TRON bears even more differences to its peers. One of them is that it is mostly entertainment-focused. While the platform retains its decentralized use of blockchain, it is aimed at the global content and entertainment sector through its foundation.
The entertainment industry is rife with agents and other middlemen that make it difficult for creators to make and keep enough of their profits. TRON cuts out these middlemen, allowing digital content creators to receive money directly from their customers.
Traffic dependency on larger platforms like Twitter and Facebook is also greatly reduced, since the process becomes more streamlined for creators. They are able to reach customers directly on the TRON platform, without the problem of filtering out a niche-specific target audience.
Other applications of TRON that set it apart from other cryptocurrencies include:
Providing a solution to the issue of data privacy and centralization, which limits content creators by funneling them into a few companies. Such companies collect data from the general population to use in their own operations. All monetary support for the TRON Foundation is derived from the TRONix system.
Using a UTXO model to monetize content to allow its creators to reap the benefits of their hard work.
Guaranteeing privacy by eliminating the need for social ads so that the bigger social platforms have less access to user data.
HOW DOES IT WORK?
TRON is capitalizing on the peer-based systems in existence to ensure that creators aren’t cheated out of their work. The platform is achieving this by creating a bridge between them and their consumers. Like any other content-focused system, for the TRON ecosystem to work, several things must happen:
Creators have to create content which they want to distribute and be rewarded for.
Consumers purchase this content, allowing the revenue to go directly to its creators.
DApps are built on the same principles that govern the entire system.
Code on the network is executed, and with time, determines the value of TRX cryptocurrency.
Another way that TRON is achieving better content distribution is by pushing decentralization, as opposed to the use of centralized servers.
HOW TO GET TRON:
Getting TRON is quite easy as long as users follow these steps:
Set up a TRON wallet which supports TRX tokens. Since TRX initially used the Ethereum ERC20 standard, it was compatible with Ether wallets. However, ever since its move to its own mainnet in May 2018, there are now dedicated wallets compatible with TRX. Some of these wallets include TRON wallet (mobile & web), TRONscan wallet (web), Cobo wallet (mobile), Altcoin.io wallet (desktop client & web) and Ledger Nano wallet (hardware). These wallets are available for iOS, Android, Windows, and Chrome.
As soon as the wallet is installed, users are issued a public and a private key. The public key acts as a wallet address to be shared with anyone who wants to send money to it. The private key, on the other hand, acts as a password and must never be disclosed to other parties.
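The relationship between the two keys can be illustrated with a short Python sketch. Note the simplifying assumption: real wallets derive addresses with elliptic-curve cryptography (e.g. secp256k1), not the bare hash used here, so this is purely an illustration of "share one, guard the other."

```python
import secrets
import hashlib

# Private key: a random secret, never disclosed (the "password").
private_key = secrets.token_hex(32)

# Public address stand-in: deterministically derived from the private key.
# (Assumption for illustration; real wallets use elliptic-curve key pairs.)
public_key = hashlib.sha256(private_key.encode()).hexdigest()

# Anyone may see the public address; deriving it is one-way, so the
# private key cannot be recovered from it.
```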
TRX can be bought on exchanges like Binance, OKEx, and HitBTC. This means that a user must first buy another cryptocurrency such as Ethereum on an exchange like Coinbase. The ETH can then be exchanged for TRX.
MARKET PERFORMANCE & PROMINENT FIGURES WHO ARE BACKING TRON:
Comparing the price of TRX now ($0.0138) to what it was in June 2018, it is easy to see that the currency is currently underperforming in a pattern shared with other major digital currencies such as Bitcoin.
Initially, there was some speculation on whether or not TRX would be able to hit $1 before the end of 2018. While that currently seems unlikely, the cryptocurrency markets have continuously shown that a few days can make a huge difference. This is especially evident in the case of Bitcoin, which began December 2017 at a price point of $10,839 and reached its peak of more than $20,000 less than nine days later.
TRX was not left out of the bullish December, posting a 750% price increase at the end of December 2017 and a 12,255% increase in January 2018.
Apart from the influential Justin Sun, TRON is supported by BITMAIN Co-founder, Jihan Wu, who has shown eagerness to purchase it in the past. Although Wu is a controversial figure in the industry, his endorsement has been a huge defining factor for the platform, which raised about $70,000,000 in its crowdfunding efforts.
Another big name backing TRON is Dai Wei, founder of the bike-sharing company Ofo and one of the most successful transport executives among TRON's investors. BTCC.com founder Yang Linke and popular business magnate Yin Mingshan are also backing the project.
TRON is working to solve one of the biggest problems that accompanied global acceptance of the internet: fair content distribution. The decentralization of the entire process reduces the overall cost incurred and ensures the longevity of the distribution system.
TRON is moving closer to its grand vision every day, along with TRX, its digital asset. Maybe eventually, the centralized model of content delivery will be eradicated completely, giving rise to a fairer, decentralized ecosystem.
To understand cryptocurrency, you first need to understand the history of the virtual coin. Virtual coin, or digital currency, differs from traditional fiat currency in that there is no physical representation to accompany each unit of value. Long before the crypto craze of 2017, there were the pioneers of electronic cash.
These early financial freethinkers shaped the market and prepared the world for the digitization of the economy. Fast forward to today’s crypto market, and thanks to the efficiency of blockchain technology, you now see digital money experiencing increased adoption globally.
Virtual Coin Before Bitcoin
Over the last twenty years, virtual currency payment systems have flourished. These systems existed long before the advanced blockchain technology of today made virtual coins, such as Bitcoin, a household name.
Virtual currencies are not considered legal tender in most countries, for many reasons. Governments do not issue these coins; instead, they are issued by the platform's developers. Today, there are many ways in which companies release virtual coins. Before cryptocurrencies, bonuses were the most popular form of issuing digital currency.
Problems with Early Virtual Coin
The problem with these early virtual coins was that they were too susceptible to attack from hackers or scammers. Additionally, the lack of any form of material confirmation left many businesses skeptical of the true value behind these concepts. Remember, this was 20 years prior to the emergence of blockchain technology. Many people didn’t even own a computer at this time.
One of the first examples of an attempt to create digital cash comes from the Netherlands in the late 1980s. Gas station owners were fed up with their businesses being robbed. The problem got so bad that someone decided it would be safer to find a way to place money onto smart cards. Trucking company owners could load these cards and give them to their drivers. The drivers would then use the cards at the participating gas stations.
Another example of virtual coin usage before cryptocurrency is the internet currency Flooz. Flooz.com issued these virtual coins in 1998 as part of a marketing campaign. Each Flooz equaled one dollar in value.
Users would receive Flooz coin for purchases made on the Flooz.com website. Users could then spend their Flooz on more products on the site, or they could take their bonus earnings and shop at numerous other participating vendors.
The concept was well before its time, and despite a multi-million dollar advertising campaign, Flooz never achieved the level of adoption required to sustain the project. The platform also took heavy losses after a combination of Russian and Filipino hackers made purchases using stolen credit cards. The company is now defunct, but the inspiration of their project lives on in the digital currencies of today.
Another key predecessor was DigiCash, founded by cryptographer David Chaum. The protocol was revolutionary and introduced the world to the concept of blind signatures. Blind signatures disguise the content of a message and use a combination of public and private keys to verify the validity of the participants. Today, this concept lives on in major cryptocurrencies in the form of public-key cryptography.
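A toy RSA blind signature in Python shows the idea: the signer produces a valid signature without ever seeing the message itself. The tiny textbook parameters below are insecure and purely illustrative.

```python
# Toy RSA key (classic textbook values; never use parameters this small).
p, q = 61, 53
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

m = 42                             # the message (in practice, its hash)
r = 7                              # blinding factor, coprime to n

blinded = (m * pow(r, e, n)) % n        # requester blinds the message
blind_sig = pow(blinded, d, n)          # signer signs without seeing m
sig = (blind_sig * pow(r, -1, n)) % n   # requester strips the blinding

assert pow(sig, e, n) == m              # the signature verifies against m
```

The blinding factor `r` cancels out during unblinding, so the final signature is identical to an ordinary RSA signature on `m`, yet the signer learned nothing about the message.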
A decade after DigiCash entered the market, a developer by the name of Wei Dai made waves in the cryptographic community after publishing a paper called "B-Money, an Anonymous, Distributed Electronic Cash System." This virtual coin concept utilized a decentralized network, which included private sending capabilities and auto-enforceable contracts. While the technical aspects of this virtual coin were far from blockchain technology, and the project never gained enough momentum to take flight, the concept greatly influenced the future cryptocurrency market.
Longtime virtual coin advocate, Nick Szabo, pioneered the proof-of-work system used today by cryptocurrencies such as Bitcoin with his Bit Gold protocol. Bit Gold helped to cement the concept of a decentralized network and the elimination of third-party verification systems. Satoshi borrowed many of Bit Gold’s concepts. So much so, that many people believed Satoshi Nakamoto to be Nick Szabo. It took a public denial by Szabo to finally convince people otherwise.
Many crypto enthusiasts consider HashCash to be the direct inspiration for Bitcoin. HashCash was introduced in 1997 by cryptographer Adam Back. Back incorporated a proof-of-work protocol to verify the validity of a transaction. HashCash’s proof-of-work protocol previously saw use as a spam reduction technique before being adapted for virtual coins. The concept impacted Nakamoto to the point that he cites Back in the Bitcoin white paper. However, HashCash lost relevance after suffering scalability issues due to increased network congestion.
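The proof-of-work idea behind HashCash can be sketched in a few lines of Python: finding a valid nonce costs computation, while checking one is nearly free. The difficulty and string encoding below are illustrative assumptions, not HashCash's exact stamp format.

```python
import hashlib
from itertools import count

def mine(message: str, difficulty: int = 2) -> int:
    """Search for a nonce whose hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(message: str, nonce: int, difficulty: int = 2) -> bool:
    """Checking a proof requires just one hash, regardless of mining cost."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("hello")
assert verify("hello", nonce)
```

Raising `difficulty` by one multiplies the expected mining work by 16 (one more hex digit), while verification stays constant-time. That asymmetry is what made the scheme useful first against spam and later for Bitcoin.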
Enter Bitcoin and the Wonders of Blockchain
Satoshi Nakamoto, the anonymous creator of Bitcoin, labels the world’s most successful cryptocurrency as “a peer-to-peer electronic cash system.” Unlike the virtual coins before Bitcoin, Satoshi had the backing of powerful new technology, blockchain.
Blockchain technology stores information in duplicate over a vast network of computers. Prior to any changes to the blockchain, fifty-one percent of the nodes must agree on the validity of the chain. This validation process references the previous blocks. The longest verifiable chain of transactions remains the valid chain. In this manner, blockchain technology eliminates the need for third-party verification systems such as banks.
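The hash-linking described above can be sketched in Python. This toy example omits proof-of-work and signatures entirely; it only shows why each block referencing the previous block's hash means tampering with one block invalidates everything after it.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, data: str) -> dict:
    return {"prev": prev_hash, "data": data}

def chain_is_valid(chain: list) -> bool:
    """Each block must reference the hash of the block before it."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != block_hash(prev):
            return False
    return True

genesis = make_block("0" * 64, "genesis")
b1 = make_block(block_hash(genesis), "tx: alice -> bob")
b2 = make_block(block_hash(b1), "tx: bob -> carol")
chain = [genesis, b1, b2]
assert chain_is_valid(chain)

# Tampering with an earlier block breaks every hash link after it.
b1["data"] = "tx: alice -> eve"
assert not chain_is_valid(chain)
```

Because an attacker would have to re-link every subsequent block (and, in a real network, redo its proof-of-work faster than the honest majority), the longest verifiable chain can serve as the shared source of truth without any bank in the middle.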
Advantages of Virtual Coins
The advent of blockchain technology increased the benefits of virtual coins significantly. Users can transfer cryptocurrency in near instant time, and at a much cheaper rate than the traditional methods of global money transfer. Cryptocurrencies eliminate the need for verification systems, which saves you big time.
To put the level of savings experienced by crypto users into perspective, look at the traditional international money transfer methods. Today, your funds require verification from potentially thirty-six different third-party organizations when sent globally. And, each organization tacks on a fee for their services.
Virtual Coins Are Here to Stay
Now, the world has multiple legitimate virtual currency options to consider. These digital currencies continue to shape the economics of the future. Today, virtual coins are more common than ever, and for the first time in history, digital money receives recognition from traditional financial firms.