Total Marks: 80 marks
Question 1 (10 marks)
Two common tools that are used by marketing companies to gain information about Internet users are web bugs and cookies. Briefly explain:
(i) how each of these technologies works (3 marks each)
(ii) preventative measures that users can take to limit or stop what information can be gathered with these tools. (2 marks each)
Your explanation should be one to two pages in total for both tools.
A cookie is a mechanism invented to provide session state to the stateless HTTP protocol. Cookies are short messages stored by a web browser, which allow a server to store and retrieve information from the browser across connections.
While this mechanism was developed to allow persistent client state, it has been increasingly utilised to track individual web usage; the most common commercial use is to track web users' browsing habits. No damage can occur to files on the browser host through the use of cookies: a cookie is a short piece of data sent between client and server – it does not contain code. JavaScript, Java, and ActiveX can also access cookies, but they are limited by the controls enforced by the browser software.
A server can add a cookie field to the header of an HTTP response, which the client stores for later retrieval. Each subsequent HTTP request sent by the browser is matched against the cookie store, and all cookies matching the URL (host/path) specification are sent along with the request.
Because of their usefulness, cookies have been used to extend the functionality of websites, for example to hold personal preferences, electronic basket contents or login state. Two types of cookie exist: session cookies, which last only as long as the browser session is open and are destroyed when the browser closes; and persistent cookies, which carry an expiry date and are stored by the browser until that date.
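A minimal Python sketch (the cookie names, values and dates below are purely illustrative) shows the two cookie types from the server's point of view: a session cookie with no expiry, and a persistent cookie carrying an explicit expiry date, both emitted as Set-Cookie headers and later returned by the browser in a Cookie header.

    from http.cookies import SimpleCookie

    # Server side: build a session cookie (no expiry, discarded when the
    # browser closes) and a persistent cookie with an explicit expiry date.
    cookie = SimpleCookie()
    cookie["session_id"] = "abc123"                 # session cookie
    cookie["prefs"] = "lang=en"                     # persistent cookie
    cookie["prefs"]["expires"] = "Wed, 31 Dec 2003 23:59:59 GMT"
    cookie["prefs"]["path"] = "/"
    print(cookie.output())                          # emitted as Set-Cookie: headers

    # Client side: on later requests to a matching host/path the browser
    # returns the stored values in a single header, for example:
    #   Cookie: session_id=abc123; prefs=lang=en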
Most advertising-related cookies are actually set via image requests to third-party servers, which carry cookie information in the separate HTTP exchange that is established.
Several mechanisms have been developed to minimise the information about a user's browsing habits that cookies can reveal. Most browsers have, of course, gained cookie blocking or pop-up warning/request windows, which can be tedious to use depending on the selected level of paranoia (including the distinction between first- and third-party cookies).
Third-party software can also be utilised to intercept cookie traffic, and can be used to review or modify the contents of the current cookie cache. A commercial corporation or government agency could also block all cookies via content filtering at their firewalls.
Web bugs, or beacon GIFs, are invisible images (usually a 1x1 GIF) on a web page that are used to help track user activity. Typically the image request goes to an alternative or third-party site and also carries cookie header information. It is therefore possible for these collection sites to build a personal profile of the user.
Web bugs can be embedded in web pages, HTML-encoded e-mail and, with the greatly enhanced feature sets of Word, Excel, PowerPoint and other now HTML-enabled applications, in ordinary documents. In the latter cases the hidden bug can also act as a fingerprint for tracking fraud or copyright infringement.
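As a hypothetical illustration (the host name and parameters below are invented), the following sketch shows how identifying details typically ride along in the URL of the invisible image, which is fetched from a third-party host together with any cookies already set for that host.

    # Build the HTML for a hypothetical 1x1 tracking image.  Fetching it
    # causes the browser to contact the third-party host and hand over the
    # identifiers embedded in the URL (plus any cookies for that host).
    def web_bug(user_id: str, page: str) -> str:
        return (f'<img src="http://ads.example.net/bug.gif?uid={user_id}'
                f'&page={page}" width="1" height="1" alt="">')

    print(web_bug("u-42", "/products/widgets"))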
Some advert-blocking software is available to prevent web bugs appearing within the browser; it is designed to inhibit all third-party content from being requested. Alternatively, privacy software is available that reveals these bugs by displaying a user-defined image in their place – it does not stop the requests and possible data exchange, but the user is at least informed that the tracking request exists.
Question 2 (10 marks)
You are a security administrator in a medium sized company. You want to install an intrusion detection system to enhance the company’s network security. Your manager has said that the company already has a firewall and doesn’t see why she should authorise the purchase of an intrusion detection system as well.
Write a report (one to two pages), describing what an intrusion detection system can do and why you feel it is necessary to have one as well as a firewall. Assume your manager is reasonably IT literate, but knows little about network security issues.
A firewall is a security enforcement device; its objective is to provide controlled access between computing network environments. An intrusion detection system is used to monitor (and in some cases automatically respond to) network and host based 'events' in real time.
Firewalls are implemented at control or security-enforcing points of a network. Their purpose is to provide controlled access based on a security access policy. Typically a firewall is used to protect internal trusted systems and servers from the external untrusted environment, keeping intruders and malicious code out while sensitive information is protected.
Policies on allowable traffic flows can be implemented on a firewall by packet header filtering, protocol-aware application proxies, or even content control. While these security-enforcing devices have verbose logging and reporting facilities, they are mostly oriented towards providing a usage audit trail rather than real-time alerting, and their understanding of the endless number of attack methodologies being created all the time is limited. Even where the logging functionality within a firewall is well developed and has enough intelligence to detect malicious data, can it be relied upon if an incident occurs and the firewall itself may have been compromised?
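To make the earlier point about packet-header filtering concrete, here is a toy Python sketch of a rule table with a default-deny policy; the addresses, networks and ports are placeholders, and a real firewall implements the same first-match logic far more efficiently and on live packets.

    import ipaddress

    # (source network, destination network, destination port, action)
    RULES = [
        ("any",        "10.0.0.10/32", 80,   "allow"),   # inbound web traffic
        ("10.0.0.0/8", "any",          25,   "allow"),   # outbound mail
        ("any",        "any",          None, "deny"),    # default deny
    ]

    def matches(addr: str, spec: str) -> bool:
        """True if the address falls inside the rule's network (or 'any')."""
        return spec == "any" or ipaddress.ip_address(addr) in ipaddress.ip_network(spec)

    def decide(src: str, dst: str, dport: int) -> str:
        """Return the action of the first rule matching the packet header."""
        for rule_src, rule_dst, rule_port, action in RULES:
            if matches(src, rule_src) and matches(dst, rule_dst) and rule_port in (None, dport):
                return action
        return "deny"

    print(decide("203.0.113.7", "10.0.0.10", 80))   # allow - permitted web traffic
    print(decide("203.0.113.7", "10.0.0.10", 23))   # deny  - telnet is not allowed in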
Network-based intrusion detection systems (NIDS) are tools that run on a dedicated host and monitor network activity promiscuously for known network packet sequences. When a signature match or statistical anomaly is triggered, the system sends alerts or performs an active response such as the wilful termination of the offending 'session'.
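A minimal sketch of the signature-matching idea follows; the patterns and addresses are illustrative only, and a real NIDS such as Snort works on live captured packets with far richer rules.

    # Toy signature table: byte patterns that, if seen in a payload,
    # indicate a known attack.  Real sensors hold thousands of these.
    SIGNATURES = {
        b"/default.ida?": "Code Red style IIS buffer overflow attempt",
        b"../../":        "directory traversal attempt",
    }

    def inspect(payload: bytes, src: str, dst: str) -> None:
        """Alert on every signature found in a captured packet payload."""
        for pattern, name in SIGNATURES.items():
            if pattern in payload:
                print(f"ALERT: {name} from {src} to {dst}")
                # An active response could also send TCP resets here to
                # tear down the offending session.

    # Example: a captured HTTP request that trips the first signature.
    inspect(b"GET /default.ida?NNNNNNNN HTTP/1.0", "203.0.113.7", "10.0.0.10")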
The security-enforcing properties of the firewall can also be verified in 'real time': traffic observed by an externally located NIDS sensor that the internally located sensor never sees is traffic the firewall is successfully blocking.
A host-based intrusion detection system (HIDS) is a form of IDS that has a host-centric view of network traffic and also depends on the application logs and system audit logs created by the installed applications and the operating system itself. The host-centric view of network traffic is essential for observing in-band attacks within SSL or IPSec encrypted traffic that terminates on that host. Additional host resources are monitored, such as file ownership, permissions and checksums, along with network sockets and process start-up and shutdown. Correlation of NIDS and HIDS information can also verify that an observed attack against a system had no effect.
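One such host-centric check, file integrity monitoring, can be sketched as follows; this is a simplified illustration in the spirit of tools such as Tripwire, and the monitored paths are examples only.

    import hashlib
    import os

    def snapshot(paths):
        """Record size, permission bits and a SHA-256 digest per file."""
        baseline = {}
        for path in paths:
            info = os.stat(path)
            with open(path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
            baseline[path] = (info.st_size, info.st_mode, digest)
        return baseline

    def compare(baseline, current):
        """Report every file whose recorded attributes have changed."""
        for path, old in baseline.items():
            if current.get(path) != old:
                print(f"ALERT: {path} has been modified")

    # Take a baseline once, store it somewhere read-only, then re-check
    # periodically; any difference is reported as a potential intrusion.
    watched = ["/etc/passwd", "/etc/hosts"]
    baseline = snapshot(watched)
    compare(baseline, snapshot(watched))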
The release of firewall patches is generally slow and conservative, while most IDS signature updates can be created and distributed relatively quickly (much like virus updates). While the number of false positives is commonly the greatest complaint about the use of IDS, the additional use of an integrated correlation tool (such as a scanner that performs vulnerability assessment) will eliminate the vast majority of them.
IDS offers a form of monitoring of the security policy within the firewall environment as well as anomaly detection. It is envisaged that the IT audit section would have remote 'real-time' access to the alerts generated by the system, as a form of random audit of the environment and as a general check that the firewall administrator is competent in their tasks.
The use of IDS, along with correlated information from the firewall and regular vulnerability assessment, should ensure that unusual or damaging events are prevented before they occur, or at least alert staff to vulnerabilities that exist. The information provided by each of these systems also yields numerous associative records that could be used as forensic evidence in any prosecution of intruders or of staff who exploit the firewall environment.
Question 3 (20 marks)
You have been asked to research recent network security incidents (not just vulnerabilities, actual attacks or attempted attacks). Find two such incidents which were reported recently (no earlier than July 2001). Write a report (one page for each report) describing the two incidents. In your response include:
(i) date of incident and a description of how the incident occurred or the method of attack (3 marks each)
(ii) the resulting effect of the incident on the network and the organisation owning the network (2 marks each)
(iii) a recommendation on how to avoid possible future attacks of this type (3 marks each)
Include a copy of the incident report including the date of the report (one or two pages) (2 marks each incident).
Your report should be approximately two pages.
30 July 2001 – the ANZ internet banking website had its DNS entry poisoned in one of Telstra's main DNS caches; for some time Internet traffic destined for www.anz.com.au was redirected to an Apache server located in Taiwan. I am currently having trouble sourcing information about the event; some details were presented on the SAGE-AU[1] mailing list.
The main Telstra nameserver, unneeda, was poisoned with incorrect cached information for the www.anz.com.au IP record. While the redirected site was not a spoofed version of the real Internet banking site, it may have been intended to become one, as the default 'just installed' Apache content was displayed. It could just as easily have been an MX record, in which case the attacker could have collected a great deal of business-critical e-mail. The impact on ANZ's internet banking business must have been minimal, as the event was not reported in the media; this is most likely because users from other ISPs were able to use the service as normal – hence it appeared to be just another hiccup with Telstra BigPond services.
While details of how the DNS was poisoned were never made public, the most likely possibility is that a spiteful user injected a fake authoritative response packet to unneeda, utilising IP spoofing and guessing of the original query-id (which was incrementally allocated). An alternative possibility is that a rogue DNS server sent a malicious reply to a valid query, which the caching name server accepted and stored as bogus information. Newer versions of BIND, the most common application used to provide DNS, contain fixes that stop either of these attacks.
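To illustrate why incrementally allocated query-ids make the first attack practical, the following toy simulation (my own illustration, not a reconstruction of the actual incident) compares an off-path attacker who forges replies using a small window of ids following one observed id, against a resolver that allocates ids sequentially and one that randomises them across the full 16-bit space.

    import random

    QID_SPACE = 1 << 16   # DNS query ids are 16-bit values

    def simulate(allocator, forged_per_query=50, trials=10_000):
        """Fraction of queries poisoned when the attacker races the real
        reply with `forged_per_query` guessed query ids per query."""
        hits = 0
        last_seen = allocator()            # one id observed by the attacker
        for _ in range(trials):
            target = allocator()           # id of the query being attacked
            guesses = {(last_seen + i) % QID_SPACE
                       for i in range(1, forged_per_query + 1)}
            if target in guesses:
                hits += 1
            last_seen = target
        return hits / trials

    def sequential_allocator():
        """Incrementally allocated ids, as the vulnerable resolver used."""
        state = {"qid": random.randrange(QID_SPACE)}
        def alloc():
            state["qid"] = (state["qid"] + 1) % QID_SPACE
            return state["qid"]
        return alloc

    def random_allocator():
        """Ids drawn uniformly from the 16-bit space, as patched resolvers do."""
        return lambda: random.randrange(QID_SPACE)

    print("sequential ids poisoned:", simulate(sequential_allocator()))  # ~1.0
    print("random ids poisoned:    ", simulate(random_allocator()))      # ~0.0008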
When an attacker populates a DNS server with malicious information, it gives the attacker a great deal of control over where on the Internet any client process directs its communications. It is commonly held within the IT security community that DNS is not secure – those who require security build security mechanisms on top of DNS, such as SSL. However SSL, which is used to provide server authentication and confidentiality, has problems in its client-side implementation – most users simply dismiss those pesky pop-up warning windows. Dismissing DNS poisoning as harmless is therefore naive; there are real threats to security as well as to the organisation's commercial reputation.
A recently reported and now fixed Microsoft Internet Explorer bug allowed attackers to 'bypass' validation of the complete certification path of the server certificate presented to the client. In effect this could allow an attacker to trick the client into accepting a certificate as valid for the site. Even without possession of the real server certificate, such a spoof of this site affects the client's perception of the company's capability to protect its assets.
While there is no complete solution to DNS spoofing, measures such as keeping name server software (for example BIND) patched, using randomised query-ids, and ensuring that users actually verify server certificates rather than dismissing the warnings should minimise the likelihood of a recurrence.
[1] Chris Cason, 2001, SAGE-AU Mailing list, http://lists.sage-au.org.au/pipermail/sage-au/2001-July/019568.html.
On the 31st of July 2002 it was discovered that the official OpenSSH source distribution contained Trojan code. Sometime between the 26th and 31st of July an additional file had been added to the OpenSSH source distribution – a piece of code that set up a possible IRC session to a server somewhere in Melbourne, Australia[2][3].
I have been unable to find any information about how the Trojan code got into the distribution. Since the OpenSSH code is mirrored in numerous locations, and the Trojaned code was present on most (possibly all) servers before it was spotted, the code must have been inserted into the source 'tarball' on the main FTP site. How that code was added to the system has not been published – it could have been a remote exploit of the hosting system via FTP or CVS.
We know that the Trojan code was distributed across the Internet; what is also known (and is why it was spotted) is that the malicious user did not have access to the official PGP signing key, so the tampered tarball no longer matched its published signature.
The reputation of the OpenSSH code base has suffered a large blow in the eyes of the corporate world, but the usual advocates have modestly pointed out that this is exactly what distribution signatures are for. In both user bases the number of users who actually check these signatures has increased.
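A minimal sketch of the kind of check that would have caught the Trojan follows, assuming the OpenSSH developers' public key is already in the local keyring; the file names are illustrative, and `gpg --verify` against a detached signature is the standard mechanism.

    import subprocess

    # Verify the downloaded tarball against its detached PGP signature.
    result = subprocess.run(
        ["gpg", "--verify", "openssh-3.4p1.tar.gz.sig", "openssh-3.4p1.tar.gz"],
        capture_output=True, text=True)

    if result.returncode == 0:
        print("Signature good - the tarball matches what the developers signed.")
    else:
        print("Signature BAD or key missing - do not build or install:")
        print(result.stderr)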
The OpenSSH development team needs to reassure the open source community that this was a one-off, and whatever method was used to place the Trojan into the tarball needs to be disclosed and fixed, whether the fix is technical or procedural.
Yes, I still use
OpenSSH, but then again I’ve been checking the PGP signatures of every CERT
alert issued since the mid 1990s. Do I have misplaced trust?
[2] OpenSSH Project Team, 2002, OpenSSH Security Advisory 1st August 2002, http://www.openssh.org/txt/trojan.adv.
[3] Edwin Groothuis, 2002, OpenSSH Trojaned (Version 3.4p1), http://www.mavetju.org/unix/openssh-trojan.php.
Question 4 (20 marks)
An increasing number of home users and small businesses are connecting to the Internet. Many of these users have very little or no technical knowledge of how the Internet works and what kind of security risks may be present when using the Internet.
(i) Discuss the security risks that home users are exposed to when connected to the Internet. (1 page) (10 marks)
(ii) Describe at least two possible forms of defence against such risks, and how the defensive measure addresses the risk/s. (1 page) (10 marks)
For this question it is not necessary to address the risks associated with purchasing products via the Internet, just address the risks for the user’s computer and associated information assets.
Your report should be approximately two pages.
Traditionally, specialist IT administrators managed computer networks. With the commercialisation of the Internet, most PC home users now wish to connect. For these users the Internet is a tool, a communications device; basic IT knowledge should be the only requirement. While advances in system and application design have hidden and simplified the underlying technical requirements of setting up and running these newly connected systems, a number of new security risks have emerged.
The primary risks to a home user are loss of personal privacy, identity theft, malicious programs or active content, poor software maintenance and misconfiguration, and the likelihood of becoming an intermediary in attacks on other networks.
Privacy issues are perhaps the most obvious risk to home-based Internet users. Tracking of web usage and trends via cookies and web bugs is common. Additionally, third-party applications can also report user activity – Comet Cursor is an example of feature-enhancing software that could also be considered spyware. The Department of Treasury's ConsumerPing software has been branded with the same label[7][8]. The latest EULAs[9] from Microsoft effectively permit the company to scan, and download software onto, the agreeing system without notice or further consent.
The possibility of a credit card number being observed during a transaction on the Internet is very well known, but the possibilities of electronic identity theft are not well publicised. With the advent of 'helpful' caching of passwords within web browsers, and of electronic wallets, even cryptographically secured information is easy to obtain. Unlike a physical wallet or building pass, the user could be blissfully unaware that perfect duplicates have been made. There is also the trivial possibility that a spiteful individual could create and send fake or manipulated e-mail messages posing as either sender or recipient.
What is universally considered to be the major concern for any computer is the existence of computer viruses and Trojans. These malicious pieces of code are generally designed to be extremely virulent and mostly destructive – they also receive a lot of publicity in the mass media. Users have been educated to scan for viruses in attachments received by e-mail or downloaded from the net. With the advent of more active websites, additional delivery mechanisms are becoming available, such as client-side JavaScript, Java and ActiveX code – likely executed without the end-user's knowledge.
Software vulnerabilities are found all the time; hopefully these are disclosed to the vendor and a patch is produced. With the increased software base on most computers, the time and effort needed to maintain the applications and operating system can become too great to be practical, yet timely patching of systems is becoming vital. In addition, the misconfiguration of system resources can also lead to unforeseen vulnerabilities.
Lastly, if malicious backdoor code, perhaps with Denial of Service capabilities, were planted on the home user's system and used to attack yet another system, the home user could be liable for damages.
Anti-virus software is a must these days; large organisations have been sued for negligence, and similar legal threats have been made against companies that suffered from the CodeRed worm, which exploited a known vulnerability for which a patch was already available[10].
Software vendors are encouraging users to download new software and updates over the Internet. Many are employing various means of authenticating their software to the client; for example, Microsoft's Windows Update, built into most of the latest versions of Windows, uses Authenticode signing. While maintaining the operating system and applications in a timely manner is essential, even the signing certificate could end up in the hands of a hacker[11][12].
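One simple defensive habit that complements signed updates is to verify a downloaded installer against the checksum the vendor publishes on an authenticated page before running it. A minimal sketch follows; the file name and expected digest are placeholders.

    import hashlib

    EXPECTED_SHA256 = "0123abcd..."   # digest copied from the vendor's web page

    def file_digest(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        sha = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(8192), b""):
                sha.update(chunk)
        return sha.hexdigest()

    if file_digest("update-installer.exe") == EXPECTED_SHA256:
        print("Digest matches the published value - proceed with the install.")
    else:
        print("Digest mismatch - do not install.")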
With the growing size of the Internet, the sources of danger and threats are always increasing. The potential number of hackers or inquisitive users is increasing, and the number of known (and unknown) vulnerabilities is increasing. Thus the security risks are increasing, and the number of controls or countermeasures is also increasing – hopefully at a similar rate.
While writing this assignment I received a spam e-mail message in my Outlook 2000 client, which quite happily attempted to install an 'interesting' client application on my workstation. The receiving mail relay system has well-maintained and regularly updated anti-spam and anti-relay features. The workstation has the latest anti-virus software and patterns, and a personal firewall, all tuned beyond a standard (shrink-wrap) configuration. The only reason I identified the event is that I had commercial host-based IDS software installed, which was triggered by unknown accesses to the system registry. The average user would not have the experience or resources to understand or customise such controls.
[7] Greg Lehey, 2002, LinuxSA Mailing list, http://www.linuxsa.org.au/mailing-list/2002-06/521.html
[8] Viveka Weiley Karmanaut, 2002, link Mailing list, http://www.anu.edu.au/mail-archives/link/link0207/0542.html
[9] Andrew Orlowski, 2002, Microsoft EULA asks for root rights – again, The Register, http://www.theregister.co.uk/content/archive/26517.html
[10] Robyn Weisman, 2001, Got a Virus? You're Sued!, NewsFactor Network, http://www.newsfactor.com/perl/story/12529.html
[11] VeriSign Inc., 2001, VeriSign Security Alert: Fraud Detected in Authenticode Code Signing Certificates, March 22 2001, http://www.verisign.com/developer/notice/authenticode/index.html
Question 5 (20 marks)
Network security vulnerabilities are sometimes discovered by independent researchers (not working for vendors). Most vendors would prefer that vulnerabilities were reported to them rather than being made public on an Internet forum such as Bugtraq, so that the vendors have time to analyse the vulnerability and produce a solution (if required).
Some researchers get frustrated that vendors may not take them seriously, or appear to be taking a long time to notify the public of the vulnerability and a suggested solution. Sometimes such researchers publish details of the vulnerability before vendors have released their own advice. An example that may assist you is the recent widespread vulnerabilities in the Simple Network Management Protocol (SNMP) used by many vendors.
In two pages, discuss your opinion on the issue and recommend a solution for this problem.
In this age of shrink-wrapped, non-stress-tested software, attention to software vulnerabilities is low on both vendors' and customers' priority lists. The customer demands easy functionality above all else – software is just a commodity. With the shift of business-critical functions from mainframe systems to the PC platform, along with Internet connectivity, demand for secure and robust software is increasing. The home user is also quickly becoming dissatisfied with the quality of the software available and how easy it is for malicious individuals or processes to harm their data. One way of improving this situation is vulnerability disclosure.
It is generally observed that three models of vulnerability disclosure exist. They have been described[13] as the Hacker “Exploitation Model”, the Corporate “Limited Disclosure Model”, and the Security Professional “Responsible Disclosure Model”.
The exploitation or 'anarchy' model attracts the most media attention (hackers who are perceived to be able to access anything). Disclosures, including active exploits, are posted to public forums without any forewarning; sometimes the vulnerability is newly discovered, sometimes it has been circulating in the underground community for some immeasurable period.
When details are released, panic and anarchy can be knowingly or unintentionally unleashed. The vendor is most likely unaware of the vulnerability and now needs to provide an appropriate 'fix' in a reasonable time; system administrators need to apply some kind of control to minimise the impact; and the script kiddies have a new toy.
The corporate non-disclosure licensing agreement, or 'limited' model, is the traditional vendor approach; it aims to minimise bad publicity for the vendor. The big software firms utilise this methodology all the time. Usually somebody reports a vulnerability in the vendor's software and is then reminded of their software licensing agreement, which effectively ties them to an information moratorium. The vendor is then basically free either to find a fix or to do nothing, without ever informing the software-using public.
The responsible disclosure or 'self-regulation' model is an industry-based policy and is ethically best practice; it is based on a common-good-for-everyone approach. Such an approach addresses the need for vendors to prepare fixes, and for the user community to be informed that such an issue exists.
The process should give vendors an opportunity to provide a 'fix' for their code, to be followed by public disclosure of the vulnerability within a reasonable timeframe – but not disclosure of the exploit code. Lastly, the process should endeavour to keep vendors accountable by publishing some kind of performance measure based on timeliness and cooperation with the co-ordination centre. An independent industry body should oversee or co-ordinate the process. It should also be stressed that these investigative reports should not be vendor vendettas, but professionally responsible (fair and honest) intelligence.
The Internet Engineering Task Force has produced such a code of practice as an Internet Draft[14]. It outlines 6 goals of responsible disclosure, along with 7 phases of responsible disclosure. Goal (3) is of particular interest:
“Provide customers with sufficient information for them to evaluate the level of security in vendors' products.”
The co-ordination role already exists in the form of the CERT Coordination Center[15], although no vendor performance measures are being published. While there will be no way to stop vulnerability and exploit information being disclosed in ad-hoc ways, hopefully a trend will develop that encourages ethically responsible disclosure. This would be more achievable if a measure of each vendor's commitment were made available to all.
[13] Michael Morgenstern, Tom Parker, 2002, The Realities of Disclosure, SecurityFocus Guest Feature – July 12 2002, SecurityFocus, http://online.securityfocus.com/guest/14155
[14] Steve Christey, Chris Wysopal, 2002, Responsible Vulnerability Disclosure Process, Internet-Draft, Internet Engineering Task Force, The Internet Society, http://www.ietf.org/internet-drafts/draft-christey-wysopal-vuln-disclosure-00.txt
[15] CERT, 2000, The CERT/CC Vulnerability Disclosure Policy, CERT Coordination Center, http://www.kb.cert.org/vuls/html/disclosure