Sunday 6 January 2013

Defenses against dictionary attacks

Salting of password hashes defeats offline dictionary attacks based on precomputation, and thus foils our hybrid attack.
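
Purely as an illustration (not from the original text), here is a minimal Python sketch of salted hashing: because every account gets its own random salt, an attacker cannot reuse a single precomputed table of hash-to-password mappings across accounts.

    import hashlib
    import hmac
    import os

    def hash_password(password):
        """Hash a password with a fresh random salt; return (salt, digest)."""
        salt = os.urandom(16)  # per-account random salt defeats precomputed tables
        digest = hashlib.sha256(salt + password.encode()).digest()
        return salt, digest

    def verify_password(password, salt, digest):
        """Recompute the salted hash and compare it against the stored digest."""
        candidate = hashlib.sha256(salt + password.encode()).digest()
        return hmac.compare_digest(candidate, digest)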

Using an inefficient cipher slows the attacker down by a constant factor, and this is in
fact done in the UNIX crypt() implementation. This technique, however, can only yield
a limited benefit because of the range of platforms that the client may be running;
JavaScript implementations in some browsers, for example, are extremely slow. A survey
of techniques to improve password security concluded that the only technique offering a
substantial long-term improvement is for users to increase the entropy of the
passwords they generate.
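
To make the constant-factor slowdown concrete, here is a hedged sketch using Python's standard hashlib.pbkdf2_hmac; the iteration count plays the role of the deliberately inefficient cipher, multiplying the cost of every guess in a dictionary attack. The iteration count shown is an illustrative value, not a figure from the text.

    import hashlib
    import os

    ITERATIONS = 200_000  # illustrative cost factor; every guess must repeat this work

    def stretch_password(password, salt=None):
        """Derive a key from the password with PBKDF2, slowing each guess by ~ITERATIONS."""
        salt = salt or os.urandom(16)
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, key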

There is also a large body of work, subsequent to the above survey, on
password-authenticated cryptographic protocols and session key generation from
human-memorable passwords. The objective of these protocols is to defeat offline dictionary
attacks on protocols where participants share a low-entropy secret. One drawback
of password-authenticated key exchange (PAKE) protocols is that they typically
rely on unrealistic assumptions such as multiple noncooperating servers or both
parties storing the password in plaintext (one exception is the PAK-X protocol).
Storing client passwords on the server is very dangerous in practice, yet even for
“provably secure” PAKE protocols, security proofs implicitly assume that the
server cannot be compromised.

Furthermore, our attacks apply in a limited sense even to PAKE protocols, because
Markovian filters also make online dictionary attacks much faster. Thus, our
attacks call into question whether it is ever meaningful for humans to generate
their own character-sequence passwords.
The situation can only become worse with time because hardware power grows
exponentially while human information processing capacity stays constant.
Considering that there is a fundamental conflict between memorability and high
subjective randomness, our work could have implications for the viability of
passwords as an authentication mechanism in the long run.

Thursday 27 December 2012

DHCP Servers

Initially the DHCP servers were intended to be part of the solution. The idea was to use
the DHCP server as the initiator of the updates. Since this would require control over the
DHCP server, mobility would be limited to networks under the direct control of
the solution. As the design goal was to allow mobility over the entire Internet, this
approach was abandoned about halfway through the project. However, the DHCP
servers were kept for testing purposes throughout the project.

The DHCP servers were configured as a master and a slave server. If the slave has not
received any signal from the master for more than 30 seconds, it assumes the master has
failed and takes over. This setup ensures that if only one server fails, the
clients can still get a valid IP address in a network known to the routers. The master
and slave split the addresses of the different subnets between them to avoid address
collisions. If one server goes down, the other takes over responsibility for the
failed server’s addresses. This operation is reversed when the faulty server
comes online again.
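
The takeover logic described above can be sketched roughly as follows; this is a hypothetical Python model, not the project's implementation, and the 30-second threshold is the only figure taken from the text.

    import time

    HEARTBEAT_TIMEOUT = 30.0  # seconds of silence before the slave assumes the master failed

    class SlaveDhcpServer:
        def __init__(self, own_pool, master_pool):
            self.own_pool = set(own_pool)        # addresses normally served by the slave
            self.master_pool = set(master_pool)  # addresses normally served by the master
            self.last_heartbeat = time.monotonic()
            self.serving_master_pool = False

        def on_heartbeat(self):
            """Called whenever a signal from the master arrives."""
            self.last_heartbeat = time.monotonic()
            # Master is back: hand its address range back (the takeover is reversed).
            self.serving_master_pool = False

        def check_master(self):
            """Periodic check: take over the master's addresses if it has been silent too long."""
            if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
                self.serving_master_pool = True

        def available_addresses(self):
            """Addresses this server may currently lease."""
            pool = set(self.own_pool)
            if self.serving_master_pool:
                pool |= self.master_pool
            return pool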

Tuesday 25 December 2012

Bottom-Boot and Top-Boot Flash Devices

Some devices are organized as a few small sectors at the bottom of the address space,
followed by large sectors that fill the remaining space. Other devices have a few small
sectors at the top of the device’s address space and use large sectors in the lower
address space. Since boot code is typically placed in small sectors, flash memory is
sometimes described as bottom-boot or top-boot, depending on where the smaller
sectors are located.

Ultimately, one sector contains the memory space that the CPU accesses as a
result of a power cycle or reset (boot). This sector is usually referred to as the boot
sector. Because some CPUs have a reset vector that is at the top of memory space and
others have a reset vector at the bottom of memory space, the flash devices come in
bottom-boot and top-boot flavors. A processor that boots to the top of memory
space would probably use a top-boot device, and a processor that boots to the bottom
of its memory space would be more suited to a bottom-boot device.
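
To make the distinction concrete, here is an illustrative Python sketch (the device size, sector sizes, and reset vector are made up) that builds both layouts and shows which one places a small, protectable sector at the CPU's reset vector.

    def sector_map(device_size, small_size, large_size, num_small, bottom_boot):
        """Return a list of (start_address, size) sectors for a bottom- or top-boot layout."""
        small = [small_size] * num_small
        large = [large_size] * ((device_size - small_size * num_small) // large_size)
        sizes = small + large if bottom_boot else large + small
        sectors, addr = [], 0
        for size in sizes:
            sectors.append((addr, size))
            addr += size
        return sectors

    def boot_sector(sectors, reset_vector):
        """Find the sector containing the address the CPU fetches from after reset."""
        return next(s for s in sectors if s[0] <= reset_vector < s[0] + s[1])

    # Hypothetical 1 MiB device: eight 8 KiB boot sectors plus 64 KiB main sectors.
    bottom = sector_map(1 << 20, 8 << 10, 64 << 10, 8, bottom_boot=True)
    top    = sector_map(1 << 20, 8 << 10, 64 << 10, 8, bottom_boot=False)

    reset_vector = 0xFFFFC  # a CPU that boots near the top of its memory space
    print(boot_sector(bottom, reset_vector))  # lands in a large sector: a poor fit
    print(boot_sector(top, reset_vector))     # lands in a small sector: top-boot matches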

When the boot sector of the flash device exists in the boot-time address space of
the CPU, it can be protected by making the boot sectors unmodifiable. Since only a
small amount of code is typically needed to provide a basic boot, there is little
wasted space if you dedicate a small sector to an unmodifiable boot routine. This
makes it possible to keep as much of the flash space as possible in-system
reprogrammable without losing the security of knowing that, even if all of the
reprogrammable flash were accidentally corrupted, the system would still be able
to boot through the small amount of code in the unmodifiable boot sector.

Sunday 11 November 2012

Who are ethical hackers?

Successful ethical hackers possess a variety of skills. First and foremost, they must be
completely trustworthy. While testing the security of a client’s systems, the ethical
hacker may discover information about the client that should remain secret. In many
cases, this information, if publicized, could lead to real intruders breaking into the
systems, possibly leading to financial losses. During an evaluation, the ethical hacker
often holds the “keys to the company,” and therefore must be trusted to exercise tight
control over any information about a target that could be misused. The sensitivity of
the information gathered during an evaluation requires that strong measures be taken
to ensure the security of the systems being employed by the ethical hackers
themselves: limited-access labs with physical security protection and full
ceiling-to-floor walls, multiple secure Internet connections, a safe to hold paper documentation
from clients, strong cryptography to protect electronic results, and isolated networks
for testing.

Ethical hackers typically have very strong programming and computer networking
skills and have been in the computer and networking business for several years.
They are also adept at installing and maintaining systems that use the more popular
operating systems (e.g., UNIX or Windows NT) used on target systems. These
base skills are augmented with detailed knowledge of the hardware and software
provided by the more popular computer and networking hardware vendors. It should
be noted that an additional specialization in security is not always necessary, as strong
skills in the other areas imply a very good understanding of how the security on
various systems is maintained. These systems management skills are necessary for the
actual vulnerability testing, but are equally important when preparing the report for
the client after the test.

Finally, good candidates for ethical hacking have more drive and patience than most
people. Unlike the way someone breaks into a computer in the movies, the work that
ethical hackers do demands a lot of time and persistence. This is a critical trait, since
criminal hackers are known to be extremely patient and willing to monitor systems for
days or weeks while waiting for an opportunity. A typical evaluation may require
several days of tedious work that is difficult to automate. Some portions of the
evaluations must be done outside of normal working hours to avoid interfering with
production at “live” targets or to simulate the timing of a real attack. When they
encounter a system with which they are unfamiliar, ethical hackers will spend the
time to learn about the system and try to find its weaknesses. Finally, keeping up
with the ever-changing world of computer and network security requires continuous
education and review.

One might observe that the skills we have described could just as easily belong to a
criminal hacker as to an ethical hacker. Just as in sports or warfare, knowledge of
the skills and techniques of your opponent is vital to your success. In the computer
security realm, the ethical hacker’s task is the harder one. With traditional crime,
anyone can become a shoplifter, a graffiti artist, or a mugger. Their potential targets
are usually easy to identify and tend to be localized. The local law enforcement
agents must know how the criminals ply their trade and how to stop them. On the
Internet anyone can download criminal hacker tools and use them to attempt to
break into computers anywhere in the world. Ethical hackers have to know the
techniques of the criminal hackers, how their activities might be detected, and how
to stop them.

Given these qualifications, how does one go about finding such individuals? The best
ethical hacker candidates will have successfully published research papers or released
popular open-source security software. The computer security community is strongly
self-policing, given the importance of its work. Most ethical hackers, and many of the
better computer and network security experts, did not set out to focus on these issues.
Most of them were computer users from various disciplines, such as astronomy and
physics, mathematics, computer science, philosophy, or liberal arts, who took it
personally when someone disrupted their work with a hack.

Thursday 16 August 2012

Assessing the impact of a design decision


Design agents evaluate the consequences of their decisions by inferring ahead
whether the decision value will satisfy constraints or support goals. In doing
so, it is likely that some of the information required in the inference process is not
yet available, and therefore the agent will attempt to substitute an expectation
for it.

Figure 1. Design expectation example

Imagine the frame design agent, in our chair design problem, making a decision
about the frame material. Before committing to the design decision, the agent
may verify whether the decision will satisfy cost constraints. Therefore it will
need to know the conditions that influence the cost, and the specific correlations
between the values for those conditions and the cost ranges. An expectation
such as the one described in Figure 1 could be critical in validating the
agent’s decision before all the cost components are known. Alternatively, the
frame design agent may make a decision that is perfectly valid at that point and
will be used by other agents, only to have it invalidated later in a cost analysis process.
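
A rough Python sketch of this validation step follows; the material names, cost ranges, and the pessimistic rule are all hypothetical stand-ins for the expectation in Figure 1. The agent checks the cost constraint against an expectation because the actual cost components are not yet known.

    # Hypothetical expectation: expected cost range per frame material.
    EXPECTED_COST = {
        "steel":    (20, 35),
        "aluminum": (30, 50),
        "wood":     (15, 40),
    }

    def expectation_satisfies_cost(material, cost_limit):
        """Validate a material decision against the cost constraint using an expectation."""
        low, high = EXPECTED_COST[material]
        # Pessimistic check: commit only if even the high end of the expected range fits.
        return high <= cost_limit

    decision = "steel"
    if expectation_satisfies_cost(decision, cost_limit=40):
        print("commit to", decision)
    else:
        print("defer or revise", decision, "- expectation predicts a cost violation")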

Thursday 2 August 2012

Data Hoarding

Data hoarding releases currency and therefore changes object semantics. A receiver
hoards replicas from a sender, and each replica is denoted as (objectID, semantics),
where semantics is either primary or copy.

General hoarding (G-hoarding): sender owns the primary of an object, i.e.
(objectID, primary), and receiver hoards a copy from the sender. After G-hoarding,
sender still owns the primary, i.e. sender (objectID, primary), and receiver
(objectID, copy).

Primary hoarding (P-hoarding): sender owns the primary of the object, i.e. (objectID,
primary), and receiver hoards the primary from the sender. After P-hoarding, the
primary is transferred from the sender to the receiver and the one in the sender
becomes a copy, i.e. sender (objectID, copy), and receiver (objectID, primary).

Copy hoarding (C-hoarding): sender owns only the copy of the object, i.e. (objectID,
copy), and receiver hoards a copy from the sender. After C-hoarding, both the sender
and the receiver hold copies, i.e. sender (objectID, copy), and receiver (objectID, copy).
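
The three operations can be summarized in a small Python sketch (an illustrative model, not code from the original); each holder's replicas are kept as a mapping from objectID to semantics, and only P-hoarding moves the primary.

    def g_hoard(sender, receiver, object_id):
        """General hoarding: sender keeps the primary, receiver gains a copy."""
        assert sender[object_id] == "primary"
        receiver[object_id] = "copy"

    def p_hoard(sender, receiver, object_id):
        """Primary hoarding: the primary moves to the receiver; the sender keeps a copy."""
        assert sender[object_id] == "primary"
        sender[object_id] = "copy"
        receiver[object_id] = "primary"

    def c_hoard(sender, receiver, object_id):
        """Copy hoarding: the sender holds only a copy; the receiver gains another copy."""
        assert sender[object_id] == "copy"
        receiver[object_id] = "copy"

    # Example: object 42 starts as the sender's primary, then the primary is hoarded away.
    sender, receiver = {42: "primary"}, {}
    p_hoard(sender, receiver, 42)
    print(sender, receiver)  # {42: 'copy'} {42: 'primary'}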

Sunday 29 July 2012

Bandwidth Estimation on Protocol Mechanisms

A simple mechanism to measure the available bandwidth on a link is the packet-pair
method. It entails sending two packets back-to-back on a link, and measuring the
inter-arrival time of those packets at the receiver. If the packets are sent on a
point-to-point link with no other traffic, the inter-arrival time measures the raw
bandwidth of the link for that size of packets. It is the absolute minimum period at
which packets of that size can be sent. Sending packets at a smaller spacing will
only queue packets at the outbound interface, with no increase in throughput. If the
packets are sent on a multiple hop path mixed with other traffic, routers on the way
may insert other packets between the two packets that were sent back-to-back,
making them arrive farther apart. The number of packets inserted is directly
proportional to the load on the outbound port each router uses to send the packets,
and does not depend on packet size if no fragmentation occurs, as time in the
routers is normally bound by protocol processing and not packet size. If packet size
is equal to the path MTU, the inter-arrival time measured at the receiver is a snapshot
of the bandwidth of the path. The inter-arrival time is the minimum period at which
packets can be sent that will not create a queue in any of the routers on the path.
If the load of all routers in the path were constant, then the inverse of the inter-arrival
time would define the optimal rate at which to send packets through this path. Since the
load is not constant, the measurement has to be repeated from time to time to adjust the
rate to the current conditions.
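
A minimal sketch of the packet-pair computation, with hypothetical numbers and assuming the packets were sent back-to-back at path-MTU size as described above: the estimated rate is simply the packet size divided by the measured inter-arrival time.

    def packet_pair_estimate(packet_size_bytes, inter_arrival_s):
        """Estimate available bandwidth in bits per second from one packet-pair sample."""
        return packet_size_bytes * 8 / inter_arrival_s

    # Example: 1500-byte packets arriving 1.2 ms apart suggest roughly 10 Mbit/s.
    rate = packet_pair_estimate(1500, 0.0012)
    print("%.1f Mbit/s" % (rate / 1e6))

Because the load is not constant, a real implementation would repeat the measurement and smooth the samples before adjusting the sending rate.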