Posted by rcbarnett on January 30, 2008.
The recent Geeks.com compromise and subsequent articles have created a perfect storm of discussion topics and concerns related to web security. The underlying theme is that true web security encompasses much more than a Nessus scan from an external company.
The concepts (and much of the text) in this post are taken directly from a blog post by Richard Bejtlich on his TaoSecurity site. I have simply tailored the concepts specifically to web security. Thanks go to Richard for always posting thought-provoking items and making me look at web security through a different set of eyes. You know what they say about imitation ;)
The title of this post forms the core of most of my recent thinking on web application security. Since I work for a commercial web application firewall company and am the ModSecurity Community Manager, I often get the chance to talk with people about web application security. While I am not a "sales" guy, I do hang out at our vendor booth when we are at conferences. I am mainly there to field technical questions and just interact with people. I have found that the title of this post is actually one of the absolute best questions to ask someone when they first come up to our booth. It always sparks interesting conversation and can shed light on specific areas of strength and weakness.
Obviously, having logos posted on a web site that tout "we are secure" is really just a marketing tactic aimed at reassuring potential customers that it is safe to purchase goods at the site. The reality is that these logos are unreliable and make no guarantee as to the real level of security offered by the web site. At best, they are an indication that the web site has met some sort of minimum standard. But that is a far cry from actually being secure.
Let me expand "secure" to mean the definition Richard provided in his first book: Security is the process of maintaining an acceptable level of perceived risk. He defined risk as the probability of suffering harm or loss. You could expand my six-word question into: "Are you operating a process that maintains an acceptable level of perceived risk for your web application?"
Let's review some of the answers you might hear to this question. I'll give an opinion regarding the utility of each answer as well. For the purpose of this exercise, let's assume it is possible to answer "yes" to this question. In other words, we just don't answer "no." We could all make arguments as to why it's impossible to be secure, but does that really mean there is no acceptable level of perceived risk in which you could operate? I doubt it. Let's take a look at the various levels of responses.
Then, crickets (i.e., silence, for you non-imaginative folks). This is completely unacceptable. The failure to provide any kind of proof is security by belief. We want security by fact. Think of it this way: would auditors accept this answer? Could you pass a PCI audit by simply responding, "Yeah, we are secure"? Nope, you need to provide evidence.
This is better, but it's another expression of belief and not fact. The only fact here is that technologies can be abused, subverted, and broken. Technologies can be simultaneously effective against one attack model and completely worthless against another.
This also reminds me of another common response I hear, and that is: "Yes, we are secure because we use SSL." Ugh... Let me share with you one personal experience that I had with an "SSL Secured" website. A while back, I decided to make an online purchase of some herbal packs that can be heated in the microwave and used to treat sore muscles. When I visited the manufacturer's web site, I was dutifully greeted with a message: "We are a secure web site! We use 128-bit SSL Encryption." This was reassuring, as I obviously did not want to send my credit card number to them in clear text. During my checkout process, I decided to verify some general SSL info about the connection. I double-clicked on the "lock" in the lower right-hand corner of my web browser and verified that the domain name associated with the SSL certificate matched the URL domain that I was visiting, that it was signed by a reputable Certificate Authority such as VeriSign, and, finally, that the certificate was still valid. Everything seemed in order, so I proceeded with the checkout process and entered my credit card data. I hit the submit button and was then presented with a message that made my stomach tighten up. The message is displayed next; however, I have edited some of the information to obscure both the company and my credit card data.
The following email message was sent:
To: email@example.com
From: RCBarnett@email.com
Subject: ONLINE HERBPACK!!!

name: Ryan Barnett
address: 1234 Someplace Ct.
city: Someplace
state: State
zip: 12345
phone#:
Type of card: American Express
name on card: Ryan Barnett
card number: 123456789012345
expiration date: 11/05
number of basics:
Number of eyepillows:
number of neckrings: 1
number of belted: 1
number of jumbo packs:
number of foot warmers: 1
number of knee wraps:
number of wrist wraps:
number of keyboard packs:
number of shoulder wrap-s:
number of cool downz:
number of hats-black:
number of hats-gray:
number of hats-navy:
number of hats-red:
number of hats-rtcamo:
number of hats-orange:
do you want it shipped to a friend:
name:
their address:
their city:
their state:
their zip:

cgiemail 1.6
I could not believe it. It looked as though they had sent out my credit card data in clear-text to an AOL email account. How could this be? They were obviously technically savvy enough to understand the need to use SSL encryption when clients submitted their data to their web site. How could they then not provide the same due diligence on the back-end of the process?
I was hoping that I was somehow mistaken. I saw a banner message at the end of the screen that indicated that the application used to process this order was called "cgiemail 1.6." I therefore hopped on Google and tried to track down the details of this application. I found a hit in Google that linked to the cgiemail webmaster guide. I quickly reviewed the contents and found what I was looking for in the "What security issues are there?" section:
Interception of network data sent from browser to server or vice versa via network eavesdropping. Eavesdroppers can operate from any point on the pathway between browser and server.
Risk: With cgiemail as with any form-to-mail program, eavesdroppers can also operate on any point on the pathway between the web server and the end reader of the mail. Since there is no encryption built into cgiemail, it is not recommended for confidential information such as credit card numbers.
Shoot, just as I suspected. I then spent the rest of the day contacting my credit card company about possible information disclosure and to place a watch on my account. I also contacted the company by sending an email to the same AOL address outlining the security issues that they needed to deal with. To summarize this story: Use of SSL does not a "secure site" make.
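The flaw here was never the transport; it was the back end. A form-to-mail script dropped the raw card number into a plain-text email. The sketch below shows the pattern the merchant should have used: mask the card number before it goes anywhere as unencrypted text, and hand the full number only to the payment processor over an encrypted channel. This is a minimal Python illustration; the function and field names are hypothetical, not cgiemail's actual interface.

```python
def mask_pan(pan: str) -> str:
    """Mask a card number (PAN), keeping only the last four digits."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    return "*" * (len(digits) - 4) + digits[-4:]

def order_notification(form: dict) -> str:
    """Build the merchant's order-notification email body.

    The card number is masked before it ever reaches an email body;
    the full PAN should travel only to the payment processor over an
    encrypted connection, never through mail relays in clear text.
    """
    safe = dict(form)
    if "card number" in safe:
        safe["card number"] = mask_pan(safe["card number"])
    return "\n".join(f"{k}: {v}" for k, v in safe.items())
```

With this in place, an eavesdropper on the mail path (or a compromised mailbox) sees `***********2345` instead of a usable card number.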
Generally speaking, regulatory compliance is usually a check-box paperwork exercise whose controls lag attack models of the day by one to five years, if not more. PCI is somewhat of an exception as it attempts to be more operationally relevant and address more current web application security issues. While there are some admirable aspects of PCI, please keep this mantra in mind -
It is much easier to pass a PCI audit if you are secure than to be secure because you pass a PCI audit.
PCI, like most other regulations, is a minimum standard of due care, and passing the audit does not make your site "unhackable." A compliant enterprise is like an ocean liner that feels secure because it left dry dock with life boats and jackets. If regulatory compliance is more than a paperwork self-survey, we approach the realm of real evidence. However, I have not seen any compliance assessments which measure anything of operational relevance. Check out Richard's blog posts on Control-Compliant security for more details on this concept and why it is inadequate. What we really need is more of a "Field-Assessed" mode of evaluation. I will discuss this concept more in depth in future blog posts.
This is getting close to the right answer, but it's still inadequate. For the first time we have some real evidence (logs) but these will probably not provide the whole picture. I believe that how people deploy and use a WAF is critical. Most people deploy a WAF in an "alert-centric" configuration which will only provide logs when a rule matches. Sure, these alert logs indicate what was identified and potentially stopped, but what about activities that were allowed? Were they all normal, or were some malicious but unrecognized by the preventative mechanism? Deploying a WAF as an HTTP level auditing device is a highly under-utilized deployment option. There is a great old quote that sums up this concept -
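For ModSecurity specifically, the difference between an alert-centric deployment and an HTTP-level auditing deployment comes down to a couple of directives. Here is a minimal ModSecurity 2.x sketch; the directive names are real, but the file path and the exact choice of audit log parts are illustrative and should be tuned per site.

```apache
# Detect and log, but do not block, while building a picture of traffic
SecRuleEngine DetectionOnly

# "On" logs ALL transactions, not just those that matched a rule
# ("RelevantOnly" is the common alert-centric setting), so that
# allowed-but-suspicious activity can be reviewed after the fact
SecAuditEngine On

# Parts: A=audit header, B=request headers, I=request body,
# F=response headers, H=audit trailer, Z=boundary
SecAuditLogParts ABIFHZ

SecAuditLog /var/log/httpd/modsec_audit.log
```

The price of full auditing is disk space and some overhead, but it is what turns a WAF from a pure alarm into an incident-response data source.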
Some would call this rationale the definition of security. Whether or not this answer is acceptable depends on the nature of the indications. If you have no indications because you are not monitoring anything, then this excuse is hollow. If you have no indications and you comprehensively track the state of a web application, then we are making real progress. That leads to the penultimate answer, which is very close to ideal.
This is really close to the correct answer. The absence of indications of intrusion is only significant if you have some assurance that you've properly instrumented and understood the web application. You must have trustworthy monitoring systems in order to trust that a web application is "secure." The lack of robust audit logs is usually the reason why organizations cannot provide this answer. Put it this way: Common Log Format (CLF) logs are not adequate for full web-based incident response. Too much data is missing. If this is really close, why isn't it correct?
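To see why CLF falls short, consider what a single CLF entry actually contains. The Python sketch below (illustrative data) parses one entry; everything it extracts is everything the format has. Request headers and the POST body, which is where most web attack payloads and sensitive parameters live, are simply not part of the record.

```python
import re

# One Common Log Format entry (illustrative data)
CLF_LINE = ('192.0.2.1 - - [30/Jan/2008:12:45:00 -0500] '
            '"POST /checkout HTTP/1.1" 200 1532')

CLF_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d+) (?P<bytes>\d+)')

def parse_clf(line: str) -> dict:
    """Extract every field a CLF entry carries."""
    m = CLF_RE.match(line)
    return m.groupdict() if m else {}

entry = parse_clf(CLF_LINE)
# An investigator gets host, timestamp, request line, status, bytes...
# ...and nothing else. The POST body that carried the attack payload
# (or the credit card number) was never written to this log.
```

A WAF audit log, by contrast, can capture the full request and response, which is exactly the data an incident responder needs.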
Here you see the reason why number 6 was insufficient. If you assumed that number 6 was OK, you forgot to ensure that your operations were up to the task of detecting and responding to intrusions. Periodically you must benchmark your perceived effectiveness against a neutral third party in an operational exercise (a "red team" event). A final assumption inherent in all seven answers is that you know the assets you are trying to secure, which is no mean feat. Consider this practical exercise: if you run a zero-knowledge (meaning unannounced to operations staff) web application penetration test, how does your organization respond? Do they even notice the penetration attempts? Do they report it through the proper escalation procedures? How long does it take before additional preventative measures are employed? Without answers to this type of "live" simulation, you will never truly know whether your monitoring and defensive mechanisms are working.
Indirectly, this post also explains why web vulnerability scanning, penetration testing, deploying a web application firewall, or log analysis alone does not adequately ensure "security." While each of these tasks excels in some areas and aids in the overall security of a website, each is also ineffective in other areas. It is the overall coordination of these efforts that will provide organizations with, as Richard would say, a truly "defensible web application."