On the user agent

‘User agent’ is a great idea that has been weirdly perverted.

Nobody these days (even highly technical people) has a user agent. (Maybe The Doctor does.)

A user agent is a piece of software, controlled by the user, that performs the automated tasks the user has instructed it to perform. It communicates with other user agents, automatically, on the user’s behalf.

Today, the term ‘user agent’ means ‘long, misleading browser-lineage-identification string’. It identifies one of ~3 corporations.

Imagine if we actually had user agents.

Like, imagine if our computers were doing things we wanted them to do, automatically, on the network. And imagine it was our computers doing these things, instead of a rental service like IFTTT or Google Alerts that’s selling info on the back end. Imagine if they stopped doing things when we told them to stop.

Imagine if non-technical users had this too.

The most important thing about user agents is that they keep doing things on the user’s behalf when the user isn’t there. This is something end users aren’t used to: that they can control an automated process, and that this process is theirs, rather than a service a company provides out of its own self-interest (inevitably partially, if not wholly, misaligned with the user’s).

Uncommon wisdom from 1976 (Rand Intelligent Terminal Agent (RITA): Design and Philosophy)

Another important part of the idea of a user agent is that the user agent is one of a team of software agents, all communicating together. The user agent provides automation, planning, and control on behalf of the user. It enforces the user’s preferences, reports on the behavior of other agents, performs maintenance tasks, and acts as a translator between the user and individual special-purpose agents. A user agent can sit in the grey area between total novice & super-hacker, and make more advanced features more accessible simply by making automation accessible.
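
A minimal sketch of that mediating role, in Python; every name here (the Request shape, the preference keys, the agent names) is invented for illustration, not any real framework’s API. The point is just that the policy lives with the user, and every other agent’s request passes through it:

```python
# Hypothetical sketch: a user agent mediating between the user's
# preferences and a team of special-purpose agents.
from dataclasses import dataclass

@dataclass
class Request:
    agent: str    # which special-purpose agent is asking
    action: str   # what it wants to do, e.g. "notify", "purchase"
    hour: int     # when (0-23)

# The preferences are the user's, stored and enforced locally.
PREFERENCES = {
    "quiet_hours": set(range(22, 24)) | set(range(0, 8)),
    "blocked_agents": {"ad_agent"},
}

def authorize(req: Request) -> bool:
    """Check every request from another agent against the user's
    standing preferences before anything happens on the user's behalf."""
    if req.agent in PREFERENCES["blocked_agents"]:
        return False
    if req.action == "notify" and req.hour in PREFERENCES["quiet_hours"]:
        return False
    return True

assert authorize(Request("calendar_agent", "notify", 14))
assert not authorize(Request("calendar_agent", "notify", 23))
```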

The ‘voice assistant’ has taken over the role that user agents used to play in ideas about possible future technologies. There’s no particular reason an Alexa-type system needs to run on Amazon hardware, or needs to be connected to services running somewhere else. Stick a Raspberry Pi in a tube, run Sphinx + Flite for speech recognition & speech synthesis, have a non-centrally-controlled repo of ‘skills’, and make it so it can do lots of things without even being hooked up to the internet. (This is basically what Mycroft already does.)
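
As a sketch of how small that stack can be, assuming the pocketsphinx Python package’s LiveSpeech interface and the flite command-line binary are installed (the skill registry and the skill itself are invented for illustration):

```python
# Hypothetical offline voice loop: local recognition in, local
# synthesis out, skills in a plain dict rather than a central store.
import subprocess
from datetime import datetime
from pocketsphinx import LiveSpeech  # local speech recognition

def say(text: str) -> None:
    # flite is a small local synthesizer; nothing leaves the machine.
    subprocess.run(["flite", "-t", text], check=False)

def time_skill(utterance: str) -> str:
    return "It is " + datetime.now().strftime("%H %M")

SKILLS = {"time": time_skill}  # illustrative; a real repo would be pluggable

for phrase in LiveSpeech():  # blocks on the microphone, no network needed
    utterance = str(phrase).lower()
    for keyword, skill in SKILLS.items():
        if keyword in utterance:
            say(skill(utterance))
            break
```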

Voice assistants, as they stand now, are direct interfaces to third-party skill systems. This means they are not user agents: they cannot automate & control all skills, connect skills together, enforce preferences, or perform complex automatic decision-making. They are voice shells, and control is handed over completely to the skill software when a skill is invoked. This makes skills harder to write & less flexible, and it means that the onus of understanding how to use a skill falls on the user. We have known how to write planners & expert systems since the 1970s, so for non-technical users the normal interface for anything that calls itself an ‘assistant’ should be a planner system with an existing expert system (understanding things like scheduling) & pluggable knowledge modules about the behavior, side effects, and API of individual skills.
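
To make that concrete: here is a toy planner in the STRIPS tradition, sketched over invented skill descriptions (the skill names, facts, and goal are all hypothetical, not any shipping assistant’s API). Each skill declares its preconditions and effects, and the agent, not the user, searches for a sequence that reaches the goal:

```python
# Toy STRIPS-style planning over pluggable skill descriptions.
# Each skill declares (preconditions, effects) as sets of facts.
from collections import deque

SKILLS = {
    "fetch_calendar": ({"online"}, {"have_calendar"}),
    "find_free_slot": ({"have_calendar"}, {"have_slot"}),
    "send_invite":    ({"have_slot", "online"}, {"meeting_scheduled"}),
}

def plan(start: frozenset, goal: set):
    """Breadth-first search for a skill sequence that achieves the goal."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        facts, steps = queue.popleft()
        if goal <= facts:
            return steps
        for name, (pre, eff) in SKILLS.items():
            if pre <= facts:
                nxt = frozenset(facts | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None  # no skill combination reaches the goal

print(plan(frozenset({"online"}), {"meeting_scheduled"}))
# -> ['fetch_calendar', 'find_free_slot', 'send_invite']
```

Declaring behavior & side effects up front means composition becomes the agent’s job: the user states a goal, and the planner decides which skills to invoke and in what order.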

(Adapted from this thread)

By John Ohno on January 5, 2019.
