the usable computer
When we create a program, the 'meaning' of the program,
or any part of it, is not 'in the computer', and it's
not 'in the program'.
The meaning of the program's constructs -- large and small,
new and re-used and parameterized, which we provide to
the machine -- is in the minds of human beings. The
resulting operation is based on agreements among people
to use certain words and ideas to describe what happens
in the machine so 'instructed'.
The same is true of the intention or purpose behind,
and bound up with, our software. It is not in the software. At
best, the software is an artifact that attempts to act
as a kind of demonstrative explanation of your intent.
The computer and the program have no logical connection
to your ideas. The ideas are between you and other people.
The computer is a kind of communication medium.
More than that, when we program, the effects of the
program on ourselves, on the machine, and on the
intended user quickly become such complex artifacts
in the real world that their impact is
very distant from any kind of explicit understanding.
One reason for this is that the mental systems our
artifact interacts with are very poorly understood.
And, in any case, we are not trying to communicate
with an explicit model of a person. We're trying to
interact with a person, using computing as a medium.
That doesn't mean that we shouldn't work hard to make
the intention and assumptions of our programs as explicit
and clear as possible, to ourselves and other humans.
Of course we should. But note that this is an
artistic-scientific effort: we're trying to be
precise and accurate but also evocative and inspiring,
because we cannot be complete. There is no 'proving'
that this program represents our intentions.
And so we need to rely on our natural abilities as
human beings to make the artifact, the program, its
operation, and its effect, humanly acceptable.
We need to use our humanity to make the program humane,
and this makes it as 'correct', given our hopefully
benign intentions, as possible.
To imagine that there is any other 'provable' criterion
for 'correctness' is to misunderstand what we do
when we create things. In all cases, we are creating
things with far more complex effects than can be found
within a written program, and far more complex than
can be expressed symbolically, possibly ever, but
certainly not at our present state of understanding of
the human mind. The symbols are only labels for very
complex ideas in our minds, which cannot be found in
machines in any form.
But a program needs to be understood by people. It also
hopefully augments some positive human goal, by playing
the role of a tool for the user, rather than the role of
a proxy enforcer, whipping the user into the role of tool
for the computer.
People need to be enabled by the user interface, whether
they are engineers and programmers working amid formally
defined systems, or users who are less formally inclined.
Software development environments need to become more
natural, comfortable, playful, and friendly. That will
put the engineer-designer into the right frame of mind
to create sensitive interfaces and positive human-machine
interactions for non-engineers.
That's the doctrine. On this site, I will present, as best
I can, principles for humane design of these
human-machine interactions.
Greg Bryant