Web security and privacy myths

Security and privacy on the internet affect nearly everyone nowadays. However, many myths are still in circulation. In this article, we discuss and debunk five common ones.

Contents

  1. Myth 1: Scanning a server externally discovers all security and privacy issues
  2. Myth 2: Deploying arbitrary HTTP response headers enhances security
  3. Myth 3: The presence of HTTPS means everything is secure
  4. Myth 4: Embedded external resources are insecure
  5. Myth 5: JavaScript and Cookies are only used for tracking
  6. Summary

Myth 1: Scanning a server externally discovers all security and privacy issues

One common myth we encounter is that an external scan of a website discovers all security and privacy issues. This is wrong, of course. We discussed several weaknesses of external scanners (especially online scanners) in our articles “Pros and cons of online assessment tools for web server security” and “Limits of Webbkoll”.

No external scanner can discover every issue, and scanners can show wrong results. For instance:

  • CryptCheck reports that HSTS and HSTS_LONG aren’t available under certain circumstances, even when they are configured
  • High-Tech Bridge’s ImmuniWeb WebScan reports that all HTTP methods (POST, OPTIONS, TRACE, etc.) are enabled if a web application firewall is in place
  • PrivacyScore is blocked by common web application firewalls because it is very noisy during testing, which results in no findings at all
  • The Observatory by Mozilla doesn’t recognize TLS 1.3 and ChaCha20-Poly1305 cipher suites
  • Webbkoll’s results are only valid for the single page it scans; the results for other pages on the same web server can be totally different (see the sketch after this list)
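
To illustrate the last point, here is a minimal sketch that compares a security header across two pages of the same site. It assumes Node.js 18+ (for the global fetch) and uses placeholder URLs:

```typescript
// Compare a security header across two pages of the same site.
// The URLs are placeholders; replace them with real pages.
async function headerForPage(url: string, header: string): Promise<string | null> {
  const response = await fetch(url, { method: "HEAD", redirect: "follow" });
  return response.headers.get(header);
}

async function main(): Promise<void> {
  const pages = ["https://example.com/", "https://example.com/blog/"];
  for (const page of pages) {
    // Headers like the Content-Security-Policy can be set per page, not per server.
    console.log(page, "→", await headerForPage(page, "content-security-policy"));
  }
}

main().catch(console.error);
```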

There are even more reasons for incomplete results:

  • Normally, external scanners can’t access server configuration files, information about installed packages and their versions, or information about the operating system of the server. Moreover, external scanners can’t assess authentication mechanisms (like password strength) or other security features on the server.
  • Some people think that there is always one single server on the other side of their connection. This is also wrong (see “distributed computing” and “transparency in distributed systems” for further information). Big web applications can consist of dozens of different physical servers. These servers can run different server software like web servers, mail servers and database servers. In most cases, scanning a website with online scanners only evaluates the web server while other server software remains untouched.
  • Scanning a web server is purely focused on technology, while information security is also about people and processes. A “secure” website doesn’t secure your data if the underlying database server stores your unencrypted personal data somewhere on the internet, accessible to everyone. Another example is a negligent employee of a company who leaves their laptop unattended. An attacker then infects the laptop with malware and thousands of data records are leaked, even though all online scanners showed no issues.
  • Scans are snapshots. Administrators can enable and disable security features before or after scanning. Most scanners don’t store results.

There are many more examples of how data can be leaked or servers can be hacked, even if all online scanners show green results and lots of “A” ratings. On the other hand, a small handicraft business with no web security measures at all will very likely leak no data and remain secure if all data is processed on other computers.

Myth 2: Deploying arbitrary HTTP response headers enhances security

Some administrators arbitrarily deploy HTTP response headers only to satisfy the above-mentioned online assessment tools. Their goal: get an “A+” rating and show all visitors how secure their server is.

Does this mean that their website is more secure? No, for several reasons:

  • Some HTTP response headers, like the Content Security Policy, allow insecure configurations and can be set to “allow everything”. In that case, the header is present but doesn’t do anything (as illustrated after this list). Most online assessment tools recognize this and show warnings, while web browsers generally don’t display such warnings.
  • Several HTTP response headers need “report-to” or “report-uri” directives (e.g. Expect-CT, Expect-Staple, CSP). The idea is that problems related to these headers are reported back to an address specified by the server. We’ve seen several web servers that deployed such headers but didn’t set any report directives. Such response headers are useless since error reports remain undelivered.
  • Another very important point is that all HTTP response headers require client-side support. Clients must understand a header and act on its directives. If a client doesn’t support or ignores an HTTP response header, there is no gain in security at all. This actually happens from time to time because some HTTP headers get deprecated (like HPKP) while others are only adopted after years of testing (like Report-To or NEL).
  • While most major web browsers implement widespread security-related HTTP response headers, some applications with built-in browsing capabilities or “leisure project web browsers” may not support any of these headers at all.
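
The difference between a header that is merely present and one that actually does something can be shown in code. The following sketch assumes an Express-based server in TypeScript; the /csp-reports endpoint and the port are placeholders:

```typescript
import express from "express";

const app = express();

app.use((_req, res, next) => {
  // Ineffective: a CSP header is present, but it allows everything.
  // res.setHeader("Content-Security-Policy", "default-src * 'unsafe-inline' 'unsafe-eval'");

  // Meaningful: restrict sources to the site itself and deliver violation reports.
  res.setHeader("Content-Security-Policy", "default-src 'self'; report-uri /csp-reports");
  next();
});

// Without an endpoint that accepts the reports, a report directive is useless.
app.post(
  "/csp-reports",
  express.json({ type: ["application/csp-report", "application/json"] }),
  (req, res) => {
    console.log("CSP violation report:", req.body);
    res.sendStatus(204); // acknowledge without content
  }
);

app.listen(8080);
```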

All in all, long lists of random HTTP response headers that are supposed to demonstrate “security” are most likely there for self-marketing.

Myth 3: The presence of HTTPS means everything is secure

Thanks to initiatives like Let’s Encrypt and visual indicators in web browsers, HTTPS became the quasi-standard on the internet. It is common to look for the “lock icon” and green color in your web browser. The result is that many people believe their internet connection is secure as soon as “https://” is displayed in their web browser. This is wrong.

HTTPS means that there is an encrypted connection to somewhere on the internet. As long as you don’t check the provided certificate, you can’t be sure that you are really connected to the expected web server. In general, web browsers only check whether a certificate is valid for the domain name; different technologies are used for this (e.g. Certificate Transparency and OCSP). However, web browsers can’t tell you whether you are connected to the expected web server. Good examples are proxies like the Startpage proxy: if you use the anonymous mode, you are connected to a Startpage server, and the Startpage server is connected to the target website. Of course, this means that the proxy can inspect the complete traffic in cleartext, despite HTTPS being in place. Companies use the same technique to inspect encrypted data traffic.

Furthermore, outdated TLS versions and cipher suites with weak or insecure algorithms can be in use while your web browser shows HTTPS. The result is weak encryption and a connection that probably isn’t more secure than plain HTTP.
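
If you want to see what your HTTPS connection actually negotiated, you can inspect the certificate, protocol version and cipher suite yourself. Here is a minimal sketch using Node.js and TypeScript; the host name is a placeholder:

```typescript
import * as tls from "node:tls";

// Connect to a host and inspect what HTTPS actually negotiated.
const socket = tls.connect(
  { host: "example.com", port: 443, servername: "example.com" },
  () => {
    const cert = socket.getPeerCertificate();
    console.log("Subject:", cert.subject);          // who the certificate was issued to
    console.log("Issuer:", cert.issuer);            // which CA signed it
    console.log("Valid until:", cert.valid_to);
    console.log("Protocol:", socket.getProtocol()); // e.g. "TLSv1.3"
    console.log("Cipher:", socket.getCipher());     // negotiated cipher suite
    socket.end();
  }
);

socket.on("error", console.error);
```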

Myth 4: Embedded external resources are insecure

Another wrong blanket statement is that embedded external resources are insecure since they can be arbitrarily changed by the third party that hosts them.

There is a security feature called Subresource Integrity (SRI). The sole purpose of SRI is to ensure the integrity of embedded third-party content. The basic idea is that the server provides one hash per external resource. Your client gets the external resource and the hash value. Then, the client calculates its own hash value for the external resource and compares it with the one provided by the server. If the external resource is changed, the hashes provided by the server and calculated by the client don’t match anymore and the resource is discarded.
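
In practice, the hash is generated once by the website operator and embedded in the HTML. The following sketch computes such an SRI value with Node.js; the file name is a placeholder, and the operator must verify that the content is benign before hashing it:

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Compute the SRI value for a local copy of a third-party script.
const content = readFileSync("library.min.js");
const digest = createHash("sha384").update(content).digest("base64");
console.log(`sha384-${digest}`);

// The resulting value goes into the integrity attribute, for example:
// <script src="https://cdn.example.com/library.min.js"
//         integrity="sha384-…" crossorigin="anonymous"></script>
```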

So, external content can be securely embedded with SRI. It remains secure as long as the administrator ensures that the third-party content isn’t malicious before creating the hash values, and attackers can’t create malicious content that matches an existing hash value. Of course, SRI must be supported by clients.

Myth 5: JavaScript and Cookies are only used for tracking

The final statement in this article is about the supposed malice of JavaScript and Cookies. Both technologies may be used for tracking purposes; however, there are totally legitimate applications:

  • Cookies are needed to store session data like session identifiers and other client-specific information, since pure HTTP/HTTPS is stateless. Without cookies, nobody would be able to log in to any website.
  • JavaScript is needed to add client-side behavior to static HTML pages (“dynamic HTML”). Some actions like encrypting/decrypting text in a web browser prior to sending it to a web server, animations or input validation are conducted locally in your web browser and aren’t supposed to be executed on the server.
  • Furthermore, JavaScript is widely used by web applications to manipulate cookies on the client (see the sketch after this list).
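
As a simple illustration of such a legitimate, tracking-free use, here is a browser-side TypeScript sketch that stores a user preference in a cookie; the cookie name and value are placeholders:

```typescript
// Store and read a UI preference in a cookie; no tracking involved.
function setPreference(name: string, value: string, maxAgeSeconds: number): void {
  // Secure and SameSite limit when and how the cookie is sent.
  document.cookie =
    `${encodeURIComponent(name)}=${encodeURIComponent(value)}` +
    `; Max-Age=${maxAgeSeconds}; Path=/; Secure; SameSite=Lax`;
}

function getPreference(name: string): string | undefined {
  const entry = document.cookie
    .split("; ")
    .find((part) => part.startsWith(`${encodeURIComponent(name)}=`));
  return entry ? decodeURIComponent(entry.slice(entry.indexOf("=") + 1)) : undefined;
}

setPreference("theme", "dark", 60 * 60 * 24 * 30); // remember the choice for 30 days
console.log(getPreference("theme"));
```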

Disabling JavaScript and Cookies due to unjustified fear and uncertainty will very likely render many websites unusable, and you won’t be able to log in anywhere. Whitelisting can be used to block unwanted JavaScript and Cookies; however, you can’t be sure that whitelisted content remains unchanged.

Summary

Most of the above-mentioned myths are quite persistent and sometimes even amplified by the media and non-technical people. However, all of them are wrong.

In summary, …

  • Always check certificates in your web browser
  • Long lists of random HTTP response headers are mostly used for self-marketing
  • Scanning a web server only discovers a very small fraction of its configuration
  • Embedded content can be used securely if SRI is in place
  • JS and Cookies aren’t only used for tracking purposes
