
Web security and privacy myths

Security and privacy on the internet affect nearly everyone these days, yet many myths about them persist. In this article, we discuss and debunk five common ones.


Myth 1: Externally scanning a server discovers all security and privacy issues

One common myth we encounter is that an external scan of a website discovers all security and privacy issues. This is wrong, of course. We discussed several weaknesses of external scanners (especially online scanners) in our articles “Pros and cons of online assessment tools for web server security” and “Limits of Webbkoll.”

No external scanner can discover every issue, and scanners sometimes report incorrect results. For instance:

  • CryptCheck shows that “HSTS” and “HSTS_LONG” aren’t available under certain circumstances.
  • High-Tech Bridge’s ImmuniWeb WebScan shows that all HTTP methods (POST, OPTIONS, TRACE, etc.) are enabled if a web application firewall is in place.
  • PrivacyScore gets blocked by standard web application firewalls due to being very noisy during testing, which results in no findings at all.
  • The Observatory by Mozilla doesn’t recognize TLS 1.3 and ChaCha20-Poly1305 cipher suites.
  • Webbkoll’s results are only valid for the single page it scans; the results for other pages on the same web server can differ.

There are even more reasons for incomplete results:

  • Typically, external scanners can’t access server configuration files, information about installed packages and their versions, or information about the server’s operating system. Moreover, external scanners can’t assess authentication mechanisms (like password strength) or other security features on the server (a minimal sketch of what such a scan actually sees follows this list).
  • Some people think that there is always one single server on the other side of their connection. This is also wrong (see “distributed computing” and “transparency in distributed systems” for further information). Big web applications can consist of dozens of different physical servers. These servers can provide numerous services like web servers, mail servers, and database servers. In most cases, scanning a website with online scanners only evaluates the web server while other server software remains untouched.
  • Scanning a web server is purely focused on technology, while information security is also about people and processes. A “secure” website doesn’t protect your data if the underlying database server stores your unencrypted personal data somewhere on the internet, accessible to everyone. Another example is a negligent employee who leaves their laptop unattended; an attacker infects the laptop with malware, and thousands of records are leaked, even though all online scanners showed no issues.
  • Scans are snapshots. Administrators can enable and disable security features before or after scanning. Most scanners don’t store results.
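
To make the points above more tangible, here is a minimal sketch, written in TypeScript for Node.js, of what such an external check boils down to: a single HTTP request and whatever response headers the server decides to return. The URL is only a placeholder, and a real scanner obviously does more than this, but it still sees nothing behind the HTTP responses.

    import { get } from 'node:https';

    // A minimal "external scan": all it can observe is the HTTP response the
    // server chooses to send back. Configuration files, installed packages, and
    // the operating system remain invisible. The URL is a placeholder.
    get('https://www.example.org/', (res) => {
      console.log('Status code:', res.statusCode);
      console.log('Strict-Transport-Security:', res.headers['strict-transport-security']);
      console.log('Content-Security-Policy:', res.headers['content-security-policy']);
      console.log('Server:', res.headers['server']);
      res.resume(); // discard the body; only the headers matter here
    });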

There are many more examples of how data can be leaked or servers can be hacked, even if all online scanners show green color and lots of “A” ratings. On the other hand, a small handicraft business whose website has no security features in place will likely leak no data and remains secure if all customer data is processed on separate computers.

Myth 2: Deploying arbitrary HTTP response headers enhances security

Some administrators arbitrarily deploy HTTP response headers only to satisfy the above-mentioned online assessment tools. Their goal: Get an “A+” rating and show all visitors how secure their server is.

Does this mean that their website is more secure? No, for several reasons:

  • Some HTTP response headers, like Content-Security-Policy, allow insecure configurations and can effectively be set to “allow everything.” In that case, the header is present, but it doesn’t do anything (see the sketch after this list). Most online assessment tools recognize this and show warnings, while most web browsers generally don’t display such warnings.
  • Another essential point is that all HTTP response headers require client-side support. Clients must understand a header and act on its directives. If a client doesn’t support an HTTP response header or ignores it, there is no security gain at all. This happens from time to time because some HTTP headers get deprecated (like HPKP), and others are only adopted after years of testing (like Report-To or NEL).
  • While most major web browsers implement widespread security-related HTTP response headers, some applications with built-in browsing capabilities or “leisure project web browsers” may not support any of these headers at all.
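
The following sketch, written in TypeScript for Node.js with an arbitrary port and illustrative policies, contrasts a Content-Security-Policy that merely exists with one that actually restricts where content may be loaded from. Both count as “header present” for a naive check, but only the second one does anything useful.

    import { createServer } from 'node:http';

    // A policy that is present but effectively allows everything:
    const uselessCsp = "default-src * 'unsafe-inline' 'unsafe-eval' data: blob:";

    // A policy that actually restricts where scripts, styles, and images may come from:
    const restrictiveCsp = "default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self'";

    createServer((req, res) => {
      // Swap in uselessCsp to see that the mere presence of the header changes nothing.
      res.setHeader('Content-Security-Policy', restrictiveCsp);
      res.setHeader('Content-Type', 'text/html; charset=utf-8');
      res.end('<h1>Header demo</h1>');
    }).listen(8080);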

All in all, long lists of random HTTP response headers that are supposed to demonstrate “security” are most likely used for self-marketing.

Myth 3: The presence of HTTPS means everything is secure

Thanks to initiatives like Let’s Encrypt and visual indicators in web browsers, HTTPS has become the quasi-standard. It is common to look for the “lock icon” and green color in your web browser. As a result, many people believe their internet connection is secure as soon as “https://” is displayed in their web browser. This is wrong.

HTTPS only means that there is an encrypted connection to some server on the internet. Unless you check the provided certificate, you can’t be sure that you are connected to the web server you expect. In general, web browsers only check whether a certificate is valid for the domain name; different technologies are used for this (e.g., Certificate Transparency and OCSP). However, web browsers can’t tell you whether the machine behind that domain is the one you expect. Good examples are proxies like the Startpage proxy: if you use its “anonymous mode,” you connect to a Startpage server, and the Startpage server connects to the target website. Of course, this means that the proxy can inspect the complete traffic in cleartext, even though HTTPS is in place. Companies use the same technique to inspect encrypted data traffic on their networks.

Furthermore, with HTTPS, your web browser and the web server can still negotiate outdated TLS versions and cipher suites with weak or insecure algorithms. The result is weak encryption and a connection that probably isn’t much more secure than plain HTTP.
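
If you want to see for yourself which TLS version, cipher suite, and certificate a server presents, a small TLS client is enough. The following sketch uses TypeScript for Node.js; the host name is only an example. Note that it confirms the certificate matches the name you connect to, not that the machine behind it is the one you expect.

    import { connect } from 'node:tls';

    // Connect and print what the server actually presents.
    const socket = connect({ host: 'www.example.org', port: 443, servername: 'www.example.org' }, () => {
      console.log('Protocol:', socket.getProtocol());   // e.g. 'TLSv1.3'
      console.log('Cipher:', socket.getCipher().name);  // e.g. 'TLS_AES_256_GCM_SHA384'
      const cert = socket.getPeerCertificate();
      console.log('Subject CN:', cert.subject.CN);
      console.log('Issuer:', cert.issuer.O);
      console.log('Valid until:', cert.valid_to);
      socket.end();
    });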

Myth 4: Embedded external resources are insecure

Another wrong blanket statement is that embedded external resources are insecure since third parties hosting the resources can arbitrarily change them.

There is a security feature called Subresource Integrity (SRI). Its sole purpose is to ensure the integrity of embedded third-party content. The basic idea is that the embedding website provides one hash per external resource. Your client gets the external resource and the hash value, calculates its own hash of the resource, and compares it with the provided one. If the external resource has been changed, the two hashes no longer match, and the web browser discards the resource.
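
As an illustration of how such a hash value is produced, here is a short sketch in TypeScript for Node.js. The file path is hypothetical; the idea is to hash a verified local copy of the third-party resource and put the result into the integrity attribute of the script or link element.

    import { createHash } from 'node:crypto';
    import { readFileSync } from 'node:fs';

    // Hash a verified local copy of the third-party file (path is hypothetical).
    const resource = readFileSync('vendor/library.js');
    const digest = createHash('sha384').update(resource).digest('base64');
    console.log(`integrity="sha384-${digest}"`);

    // The value is then used roughly like this:
    //   <script src="https://cdn.example.com/library.js"
    //           integrity="sha384-..." crossorigin="anonymous"></script>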

So, external content can be securely embedded with SRI. It remains secure as long as the administrator ensures that the third-party content isn’t malicious before creating the hash values, and attackers can’t create malicious content that matches the hash value. Of course, clients must support SRI.

Myth 5: JavaScript and Cookies are only used for tracking

The final myth in this article is about the supposed malice of JavaScript and Cookies. You can use both technologies to track users; however, there are legitimate applications:

  • Cookies are needed to store session data (typically a session identifier issued after you log in) and other client-specific information, since pure HTTP/HTTPS is stateless. Without cookies, nobody would be able to log in to any website (see the sketch after this list).
  • JavaScript is needed to add client-side behavior to static HTML pages (“dynamic HTML”). Some actions like encrypting/decrypting text in a web browser before sending it to a web server, animations, or input validation are conducted locally in your web browser. They aren’t supposed to be executed on the server.
  • Furthermore, JavaScript is widely used by web applications to manipulate cookies on the client.
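
As a sketch of the first point above, here is a minimal session handler in TypeScript for Node.js; the port, cookie name, and in-memory session store are illustrative only. Without the cookie, the server has no way to recognize the client on its next request.

    import { createServer } from 'node:http';
    import { randomBytes } from 'node:crypto';

    // HTTP is stateless, so the server hands out an opaque session ID in a cookie
    // and looks it up again on the next request.
    const sessions = new Map<string, { visits: number }>();

    createServer((req, res) => {
      const match = (req.headers.cookie ?? '').match(/session=([0-9a-f]+)/);
      let id = match?.[1];

      if (!id || !sessions.has(id)) {
        id = randomBytes(16).toString('hex');
        sessions.set(id, { visits: 0 });
        // HttpOnly and Secure limit what scripts and plain HTTP can do with the cookie.
        res.setHeader('Set-Cookie', `session=${id}; HttpOnly; Secure; SameSite=Lax; Path=/`);
      }

      const session = sessions.get(id)!;
      session.visits += 1;
      res.end(`You have visited this page ${session.visits} time(s) in this session.`);
    }).listen(8080);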

Disabling JavaScript and Cookies due to unjustified fear and uncertainty will likely render many websites unusable, and you won’t be able to log in anywhere. An allowlist can be used to permit only wanted JavaScript and Cookies; however, you can’t be sure that content on the allowlist remains unchanged.

Summary

Most of the myths mentioned above are quite persistent and are sometimes even amplified by the media and by non-technical people. However, all of them are wrong.

In summary:

  • Always check certificates in your web browser.
  • Long lists of random HTTP response headers are mostly used for self-marketing.
  • Scanning a web server discovers only a tiny fraction of its configuration.
  • Websites can embed third-party content securely with SRI.
  • Websites use JavaScript and Cookies for legitimate purposes.
