




"We build systems like the Wright brothers built airplanes - build the whole thing, push it off the cliff, let it crash, and start over again."
Software researcher on software development, 1968.
The first large-scale computer system contained over 19,000 valves and was called ENIAC (Electronic Numerical Integrator and Computer). It was so successful that it ran for over 11 years before it was switched off (not many modern-day computers will run for more than a few years before they are considered unusable). By today's standards, though, it was a lumbering dinosaur, and by the time it was dismantled it weighed over 30 tons and spread itself over 1,500 square feet. Amazingly, it also consumed over 25kW of electrical power (equivalent to the power of over 400 60W light bulbs), but could perform over 100,000 calculations per second (which, even by today's standards, is reasonable). Unfortunately, it was unreliable, and would work only for a few hours, on average, before an electronic valve needed to be replaced. Faultfinding, though, was much easier in those days, as a valve that was not working would not glow, and would be cold to the touch.
Mastering Computing, W. Buchanan, Palgrave.
Isn't that interesting?








Agents: Friend or Foe?

So what benefit will agents bring, and are they worth the trouble? Well, they're worth it, as they allow us to migrate processing away from busy servers towards the client. They tend to carry out their tasks quietly, and can turn raw data into a form that the server can quickly use. But aren't we leaving ourselves open to a new wave of super viruses, in the form of undercover agents and undercover servers? These little agent programs work for the other side (the hacker) and can pass on sensitive information to others. The Internet now provides a convenient path for untrusted agents to travel. So how can we stop this? Well, the only real way is for agents to authenticate themselves to the server, and vice versa. This will involve some form of data encryption, possibly using a secret key. Agents also cannot be allowed to roam wherever they want, so there must be some mechanism for providing homes for agents to live in, which only admit valid agents.
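The secret-key authentication mentioned above could take the form of a challenge-response exchange: the agent home sends a random challenge, and the agent proves it knows the shared key without ever sending the key itself. The essay doesn't specify a scheme, so the HMAC construction and key below are purely illustrative assumptions:

```python
import hashlib
import hmac
import os

SECRET_KEY = b"shared-secret"  # hypothetical key, pre-shared between agent and home


def issue_challenge():
    """The agent home issues a fresh random challenge for each visitor."""
    return os.urandom(16)


def respond(challenge, key=SECRET_KEY):
    """The agent answers by keying an HMAC over the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()


def verify(challenge, response, key=SECRET_KEY):
    """The home admits the agent only if the response matches its own HMAC."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)


challenge = issue_challenge()
print(verify(challenge, respond(challenge)))   # a valid agent is admitted
print(verify(challenge, os.urandom(32)))       # an impostor without the key is rejected
```

Because each challenge is random, a hostile agent cannot simply replay an answer it has overheard; it must hold the key, which is the "vice versa" direction too if the server answers a challenge from the agent.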

Congratulations, I am really impressed. Your web site has given me the motivation that I have missed over the past two years of pushing buttons on my multimedia course at my so-called prestigious university. ...


Comment on Three Generations essay, Sun 01/07/2001 3:15 PM

The big worry is the use of agents to breach civil liberties. What's to stop a government agency from downloading an agent to your home computer, which then monitors every event within the computer and finds out the contents of all of your emails? It's possible, and there are many commercial programs which will scan a computer looking for available ports to connect into. Once attached to the computer, there is little to stop a downloaded program from gaining access to all the resources of the computer. At present, Java programs which run from WWW pages are protected against this type of attack, and only allow minimal access to local resources.
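The port-scanning programs mentioned above boil down to something very simple: attempt a TCP connection to each port and see whether anything answers. A minimal sketch (the host and port list are arbitrary examples, not taken from the essay):

```python
import socket


def port_open(host, port, timeout=0.5):
    """Try a TCP connect; an accepted connection means something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Probe a few well-known ports on the local machine.
for port in (22, 80, 443):
    print(port, "open" if port_open("127.0.0.1", port) else "closed")
```

Any port that reports "open" is a door a downloaded program could potentially connect back through, which is exactly why unsolicited listening ports are worth worrying about.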

So beware: the cleanest attack on a system is through the TCP/IP stack. If this is tampered with, it can allow programs to be run which open up local ports that can be connected into whenever the user logs into the Internet (or even any network).

So what's the next logical step in client-server networks? Agent technology. And what's the next natural step in agent technology? Mobile agents. These helpful little agents like to work independently. They are dispatched to clients, then work quietly gathering information and sending it back whenever required, or whenever a user connects back onto the Internet. They are thus extremely useful when users are themselves mobile, and use notebook computers to perform their business.

So what about security? Passwords and login IDs are a terrible method of securing a system, and provide little protection against external hackers. An improved method is for the server to scan an audit log file for the user and determine their typical usage (their user profile). An agent can then be dispatched to the computer which the user is using, which checks whether the user is operating as they usually do. If not, the agent can alert the server that there is a possible breach, or that the user may be acting in an unusual manner (typically the first sign of a fraud). An example would be if a user started to type at 70wpm where before they used chop-sticks to type their commands. Agents can also be dispatched with a specification of the restrictions that a user must operate within, such as which programs they are allowed to execute, which resources they are allowed, and so on. The agent would then not allow any access outside these limits. All of this means less processing for the server, and allows for fine-tuning of user rights to resources.
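The typing-speed example above amounts to a simple statistical check: compare the observed rate against the user's historical profile and flag anything far outside it. The profile data and threshold below are invented for illustration; the essay does not prescribe a particular test:

```python
from statistics import mean, stdev

# Hypothetical user profile: typing speeds (wpm) recorded in past sessions.
profile_wpm = [12, 14, 11, 13, 15, 12]


def is_unusual(observed_wpm, history, threshold=3.0):
    """Flag a session whose typing speed sits far outside the user's profile.

    Uses a z-score: how many standard deviations the observation is
    from the historical mean.
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed_wpm != mu
    return abs(observed_wpm - mu) / sigma > threshold


print(is_unusual(13, profile_wpm))   # a typical speed raises no alert
print(is_unusual(70, profile_wpm))   # a sudden 70wpm session alerts the server
```

The same shape of check extends beyond typing speed to login times, programs run, or resources touched, with the dispatched agent doing the comparison locally instead of the server.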

Comments on this essay

If you've got any comments on this essay (no matter if you agree or disagree with it), please send them to me using the form below.


Name (Surname, First Name):

[Not required]


Your comment

Note that your comments may be published to a comments page, but none of your details will be added to the comment, just the date that it was sent. Thank you.