Excerpt 1

Alan Kay is still waiting for his dream to come true (fastcompany.com) 337 points by sohkamyung on Sept 16, 2017 | 288 comments

alankay on Sept 17, 2017 [–]

Let me try to help this community regarding this article by providing some context. First, you need to realize that in the more than 50 years of my career I have always waited to be asked: every paper, talk, and interview has been invited, never solicited by me. But there is a body of results from these that do put forward my opinions.

This article was a surprise, because the interview was done a few years ago for a book the interviewer was writing. It's worth noting that nowhere in the actual interview did I advocate going back and doing a Dynabook. My comments are mostly about media and why it's important to understand and design well any medium that will be spread and used en masse.

If you looked closely, then you would have noticed the big difference between the interview and the front matter. For example, I'm not still waiting for my dream to come true. You need to be sophisticated enough to see that this is a headline written to attract. It has nothing to do with what I said. And, if you looked closely, you might note a non sequitur right at the beginning, from “you want to see old media?” to no follow-up. This is because that section was taken from a chapter of the book but then edited by others. The first version of the article said I was fired from Apple, but it was Steve who was fired, and some editor misunderstood. In the interview itself there are transcription mistakes that were not found and corrected. And of course they didn't send me the article ahead of time (I could have fixed all this). I think I would only have made a few parts of the interview a little more understandable. It's raw, but – once you understand the criticisms – I think most will find them quite valid.

In the interview – to say it again – I'm not calling for the field to implement my vision, but I am calling for the field to have a larger vision of civilization and how mass media and our tools contribute to or detract from it. Thoreau had a good line. He said “We become the tools of our tools” – meaning, be very careful when you design tools and put them out for use. (Our consumer-based technology field is not being at all careful.)

Tossrock on Sept 17, 2017 [–]

Professor Kay, it's an honor to see you taking the time to provide context to this interview. For what it's worth, I was skimming the article until I saw your comments on human universals, and that made me sit up and take notice. From that point on, it was pretty clear to me you were sounding a note of alarm about the addictive quality of our current consumer electronics industry, and the wasted potential of them as pedagogic devices.

austenallred on Sept 17, 2017 [–]

Wow, this article is a pretty egregious misinterpretation of those points. I didn't come away with anything even vaguely resembling what you just said, even if we disregard the factual errors. Candidly, your comments here make a lot more sense than some “We haven't yet built the DynaBook the way we should” piece.

alankay on Sept 17, 2017 [–]

People often ask me “Is this a Dynabook, is that a Dynabook?”. Only about 5% of the idea was in packaging (and there were 2 other different packages contemplated in 1968 besides the tablet idea – the HMD idea from Ivan, and the ubiquitous idea from Nicholas). Almost all the thought I did was based on what had already been done in the ARPA community – rendered as “services” – and resculpted for the world of the child. It was all quite straightforwardly about what Steve later called “Wheels for the Mind”.

If people are interested to see part of what we had in mind, a few years ago a few of us, including Dan Ingalls, revived a version of the Xerox Parc software from 1978 that Xerox had thrown away (it was rescued from a trash heap). This system was of the vintage that Steve Jobs saw the next year when he visited in 1979. I used this system to make a presentation for a Ted Nelson tribute. It should start at 2:15. See what you think about what happens around 9:05. https://youtu.be/AnrlSqtpOkw?t=135

Next year will be the 50th anniversary of this idea, and many things have happened since then, so it would be crazy to hark back to a set of ideas that were framed in the context of being buildable over 10 years, and that would have been ridiculous if we didn't have them in 30 (that would be 1998, almost 20 years ago).

The large idea of ARPA/PARC was that desirable futures for humanity will require many difficult things to be learned beyond reading and writing and a few sums. If “equal rights” is to mean something over the entire planet, this will be very difficult. If we are to be able to deal with the whole planet as a complex system of which we complex systems are parts, then we'll have to learn a lot of things that our genetics are not particularly well set up for. We can't say “well, most people aren't interested in stuff like this”, because we want them to be voting citizens as part of their rights, and this means that a large part of their education needs to be learning how to argue in ways that make progress rather than just trying to win. This will require considerable knowledge and context. The people who do say “well, most people aren't interested in stuff like this” are missing the world that we are in, and putting convenience and money making ahead of progress and even survival. That was crazy 50 years ago, and should be even more apparently crazy now.

We are set up genetically to learn the environment/culture around us. If we have media that appear to our nervous systems to be an environment, we will try to learn those ways of thinking and doing, and even our conception of reality. We can't let the commercial lure of “legal drugs” in the form of media and other forms put us into a narcotic sleep when we need to be tending and building our gardens. The good news about “media as environment” was what attracted a lot of us 50 years ago – that is, that great environments/cultures will also be readily learned by our nervous systems. That was one of Montessori's great ideas, and one of McLuhan's great ideas, and it's a great idea we need to understand.

There aren't any parents around to take care of childish adults. We are it. So we need to grow up and take responsibility for our world, and the first actions should be to try to understand our actual situations.

pls2halp on Sept 17, 2017 [–]

I'm curious whether you've seen the concepts of atemporality[1] and network culture[2] floating around. Basically, the core thesis associated with these is that we have adopted the internet as our primary mode of processing information, and in the process have lost the sense of a cohesive narrative that is inherent in reading a book/essay or listening to a whole talk. You become fully immersed in Plato's worldview when reading The Republic, but if you were to see someone explaining the allegory of the cave in the absence of a wider context, you would only take the elements which fit your worldview and not his wider conception of knowledge. I think this ties into what happened to the concept of a centralised computer network, working for the good of humanity, turning into today's fractured silos, working to mine individuals for profit.

[1] http://index.varnelis.net/network_culture/1_time_history_und…

[2] http://index.varnelis.net/network_culture

alankay on Sept 17, 2017 [–]

Thanks for the references. I haven't seen these, but the ideas have been around since the mid-60s by virtue of a number of the researchers back then having delved into McLuhan, Mumford, Innis, etc. and applied the ideas to the contemplated revolutions in personal computing and networking media we were trying to invent. I think a big point here is that going to a network culture doesn't mandate losing narrative, it just makes the probability much higher for people who are not self-conscious about their surrounding contexts. If we take a look at Socrates (portrayed as an oral culture genius) complaining about writing – e.g. it removes the need to remember and so we will lose this, etc. – we also have to realize that we are reading this from a very good early writer in Plato. Both of them loved irony, and if we look at this from that point of view, Plato is actually saying “Guess what? If you -decide- to remember while reading then you get the best of both worlds – you'll get the stuff between your ears where it will do you the most good -and- you will have gotten it many times faster than listening, usually in a better form, and from many more resources than oral culture provides”. This was the idea in the 60s: provide something much better – and by the way it includes narrative and new possibilities for narrative – but then like reading and writing, teach what it is in the schools so that a pop culture version which misses most of the new powers is avoided.

mmiller on Sept 19, 2017 [–]

“There aren't any parents around to take care of childish adults. We are it. So we need to grow up and take responsibility for our world, and the first actions should be to try to understand our actual situations.” I am seeing the expectation that there will be parents around to take care of childish adults, though this has really come into prominence in the last 10 years, and the last 3 years in particular. For me, it's evoked notions of H. G. Wells's “Eloi.” If that sentiment moves forward unchanged, we won't get “parents” in reality, of course, but some perverted in loco parentis in society. I've heard hope expressed in some quarters that reality will provide some needed blows from some 2×4's across the head once the young venture out into the world, but I wonder whether sheer numbers will decide this; whether the young will choose to reorient our society, in an attempt to please themselves, rather than being influenced by its experience.

Re. “most people are not interested in this”: From what I've seen, this excuse came out of a combination of the technical side of the “two cultures,” and the distraction of a lot of people becoming excited by some perceived new possibilities. More recently, my perspective has shifted to it coming out of a perverted notion of “self-esteem”: that challenge is “harmful,” because being contradicted creates a sense of limits, isolation or shame, or more materialistically, the fear of economic isolation, thereby reducing career prospects for something original. What's emerged is a desire to affirm one's self-image as “good,” regardless of notions of good works. This is where the “legal drugs” come in, reinforcing this. Neil Postman was right to fear this dynamic.

Diverting off of what I'm saying here (though staying on your topic), have you looked at William Easterly's critique of how foreign aid is conducted? I think it dovetails nicely with what you're talking about here. The short of it is that most aid efforts to the undeveloped world offer some form of short-term relief, but they don't address at all the political and economic issues that cause the problems the aid is trying to address in the first place. Secondly, when he's tried to confront the aid organizations about this, there is no interest in pursuing these matters – a version of “most people aren't interested in stuff like this.” Whether there's a sincere desire to solve problems, or just a wish to go through the motions of “helping,” to make it appear like something is happening (i.e. putting on a kind of show of compassion for public relations, and satisfying certain political goals), I don't know. It seems like the latter. Do you have any insight on what's causing the reticence to get into these matters? Easterly didn't seem to have answers for that, as I recall.

alankay on Sept 19, 2017 [–]

Closer to home – for example in the US public educational system – we have prime examples of your (and Easterly's) point about “band-aids” vs. “health”. After acknowledging how politics works, I think we can see other factors at work in those more genuinely interested in dealing with problems. Some of these are almost certainly (a) the idea that “doing something” is better than doing nothing (b) that “large things are harder than small things” (c) the lack of “systems consciousness” amongst most adults (d) pick a few more.

The “it's a start” reply, which is often heard when criticizing actions in education which will get nowhere (or worse, dig the hole deeper) is part of several fallacies about “making progress”: the idea that “if we just iterate enough” we will get to the levels of improvement needed. Any biologist will point out that “Darwinian processes” don't optimize, they just find fits to the environment. So if the environment is weak you will get good fits that are weak.

A “being more tough” way to think about this is what I've called in talks “the MacCready Sweet Spot” – it's the threshold above the “merely better” where something important is different. For example, consider reading scores. They can go up or down, but unless a kid gets over the threshold of “reading for meaning” rather than deciphering codes, none of the ups and downs below count. For a whole population, the US is generally under the needed threshold for reading, and that is the systemic problem that needs to be worked on (not raising the scores a few points). To stay on this example, we find studies that show it is very hard to learn to read fluently after we've learned oral language fluently. Montessori homed in on this earlier than most, and it has since been confirmed more rigorously. And this is the case for many new things that we need to get fluent at and above threshold. So at the systems level of thinking we should be putting enormous resources into reforming the elementary grades rather than trying to “fix” high schools. And so forth.

This is the logic behind building dams and levees and installing pumps and runoff paths before flooding. One recent study indicated that the costs of prevention are 20% of the costs of disaster. We could add to (d) above the real difficulties humans have of imagining certain kinds of things: we have no trouble with imagining gods, demons, witches, etc. but can't get ourselves beforehand into the “go all out” state of mind we have during an actual disaster (where heroes show up from everywhere). The very same people mostly can't take action when there isn't a disaster right in front of them. This is very human. But, as I've pointed out elsewhere, part of “civilization” is to learn how to “do better than human” for hard to learn things.

mmiller on Sept 20, 2017 [–]

Hmm. So, it sounds like the same “keyhole” problem I've seen you talk about before (you used an AIDS epidemic as an example with this). What's seen is taken as “good enough,” because the small perspective seems large enough. If there are any frustrations or tragedies, they're taken as, “It goes with the territory. Just keep plugging away.”

There's a parable I used to hear that I think plays right into this: Two people are walking along a beach, and they see an enormous field of starfish stranded ashore, and one of them starts throwing them, one by one, back into the sea. The other is watching, and says, “What's the point? You're not going to be able to save all of them.” The person doing the throwing holds up a starfish, and says, “I can save this one.” It's a nice thought, talking about good will and perseverance, and certainly the message shouldn't be, “Give up,” but I think it nicely illustrates the “keyhole” problem, because ideas like this lead people to believe that because they can see people who need help, even if the number is more than they can handle, and they're trying to help those people in the moment, that they're improving their lives in the long-term. That may not be true.

I've seen you talk about the “MacCready Sweet Spot” in relation to the Apollo program. BTW, I first heard you talk about that in a web video from some congressional testimony you gave back in 1982, when Al Gore was Chairman of the Science and Technology Committee. When you said that the Apollo rockets were below threshold, not nearly good enough to advance space travel, and that the rockets were a kind of kludge, the camera was panning around the room, showing large posters of different NASA missions that had been hung up around the chamber. Gore said in jest, “The walls in this room are shaking!” I can imagine! When I first heard you say that, it struck me as so contrary to the emotional impact I had from understanding what was accomplished (I do think that landing on the Moon and returning safely to Earth was no mean feat, particularly when the U.S. couldn't get a rocket into space to save our lives 12 years earlier (I don't mean that literally)), but as I listened to you explain how the rocket was designed (450 ft., mostly high explosives, with room for only 3 astronauts, not to mention that the missions were for something like 9 days at a time: three days to get there; two, sometimes more, days on the surface; and then three days to get home), it occurred to me for the first time, “My gosh! He's right!”

It really helped explain my disappointment at seeing us not get beyond low-Earth orbit for decades. For years, I thought it was just a lack of will. I've explained to people that when I was growing up in the '80s, I had this expectation drummed into me (willingly), as many people in my generation did, that we would see interplanetary travel, probably within our lifetimes, and in several generations, interstellar travel. It was very disappointing to see the Space Shuttle program cancelled with seemingly nothing beyond it on the horizon, and I think more importantly, no goals for anything beyond it that have been compelling. I heard you explain in a more recent presentation that this was a natural outcome of Apollo, that it set in motion something that had its own inertia to play out, but the end result is no one has any enthusiasm for space travel anymore, because the expectations have been set so low. The message being, “Beware of large efforts below threshold.” Indeed!

alankay on Sept 20, 2017 [–]

We are “story creatures” and it takes a lot of training and willpower to depart from “fond stories and beliefs” to “actually think things through”. That the moon shot was just a political gesture – and also relevant to ICBMs etc – was known to every scientist and most engineers who were willing to think about the problem for more than a few seconds. We hoped that the -romance- of the shot would lead to the very different kinds of technologies needed for real space travel (basically it's about MV = mv, and if you don't want to have to carry (and move) a lot of M, you have to have very high V, beyond what chemical reactions can produce). If you have to have a large M you use most of it to move just it! This has been known for more than 100 years. But the real romance and its implications didn't happen in the general public and politicians.
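To put rough numbers on the MV = mv point, here is a small back-of-the-envelope sketch using the Tsiolkovsky rocket equation (the standard refinement of that momentum intuition). The delta-v and exhaust-velocity figures are illustrative assumptions, not numbers from the thread:

    import math

    def mass_ratio(delta_v_km_s, exhaust_velocity_km_s):
        # Tsiolkovsky rocket equation: launch mass / final mass needed
        # to achieve a given delta-v at a given exhaust velocity.
        return math.exp(delta_v_km_s / exhaust_velocity_km_s)

    delta_v = 9.4  # assumed rough delta-v to reach low Earth orbit, km/s

    for label, v_e in [("chemical exhaust, ~4.4 km/s", 4.4),
                       ("hypothetical high-V drive, ~30 km/s", 30.0)]:
        r = mass_ratio(delta_v, v_e)
        print(f"{label}: mass ratio {r:.1f}, "
              f"{1 - 1 / r:.0%} of launch mass is propellant")

With chemical exhaust velocities, most of what you launch is propellant whose job is largely to move other propellant; a much higher exhaust velocity collapses that ratio, which is the point about needing very different technologies for real space travel.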

scroot on Sept 20, 2017 [–]

And the story we tell ourselves today is, frankly, a dismal one. It's that all of computing should be invented and put into the service of “the economy” rather than people. Instead of a culture of “computational literacy” in which human thought is extended to another level to the same effect as written literacy hundreds of years ago, we have an environment of complex technologies that cater to our most base evolutionary addictions and surveil us for profit. Our universities are no longer institutions where people learn how to think, but rather where they learn how to “do” – usually “doing” involves vocational practices that already exist, especially those that some manager (ie provost or dean) deems economically important. This is why you have generations of programmers bitching about type systems instead of the very politics, history, and social consequences of their own wares. We don't have funding like ARPA/IPTO anymore and the devices and software of our world show it. Everything is some iteration on ideas that came from that period, good or bad – iterations whose goal is always “efficiency” in some form. Our current political culture prevents big initiatives like this, because how on Earth would they benefit the economy in the short or medium term, the limits of our new horizon? Because these technologies have been created in service of an economic system that has proliferated social problems, they can never be a meaningful solution to those problems. Sure, we might invent some new systems for dealing with environmental catastrophe, but they are always predicated on the assumption that people should consume more and more. We are at the behest of billionaires – smart ones, mostly – who understand complex systems but also have an interest in ensuring that they remain complex. It is unlikely that we will achieve a new kind of transcendent way of computing until we change the way we think about politics and economics. That is our environment. That is the “fit” that our technical systems have, as you say.

mmiller on Sept 21, 2017 [–]

Great description of the problem (and great description of what we could have instead)! What came to mind as I read what you said here was a bit that I caught Neil deGrasse Tyson talking about from 8 years ago. As I heard him say this, I thought he was right on point, but I also felt sad that it's pretty obvious we're not thinking like this in computing. It turns out this is not just a problem in computing, but in science funding generally. That's what he was talking about, though he was quite polite about it: https://youtu.be/UlHOAUIIuq0?t=22m30s It strikes me that a very corrosive thought process in our society has been to politicize the notion of “how competitive we are” economically. Sure, that matters, but I see it more as a symptom than a cause of social problems. I hate seeing it brought up in discussions about education, because sure, competition is going to be a part of societal living, and in many educational environments, there's some aspect of competition to it (a story I heard from my grandfather from when he entered medical school was, “Look to your left. Look to your right. Only one of you will be graduating with a medical degree,” because that was the intended ratio along the bell curve), but bringing economic competitiveness into education misses the point badly. I understand where the impulse to focus on that comes from, because globalization tends to produce a much more competitive economic landscape, where people feel much more uncertain about basic questions they have to answer. Part of which is creating the life they want, but often people end up missing a significant part of actually creating it (if it's even feasible. What I see more often is a compromise, because there are only so many hours in a day, and only so much effort can be put into it) in the process of trying to create it. They get caught up in “doing,” as you said. As I've thought back on the '60s, it seems like while there was still competition going on, the emphasis was on a political competition, internationally, not economic. There was a significant technological component to that, because of the Cold War/nuclear weapons. The creation of ARPA and NASA was an effect of that. My understanding is we underwent a reorientation in the 1980s, because it was realized that there was too little attention paid to the benefits that a relatively autonomous economy can produce, killing off bad ideas, where what's being offered doesn't match with what people need or want, and allowing better ones to replace them. That's definitely needed, but I'm in agreement with Kay that what education should be about is helping people understand what they need. Perhaps we could start by telling today's students that if and when they have children, what their children need is to understand the basic thought-inventions of our society in an environment where they're more likely to get that. Instead, what we've been doing is treating schools like glorified daycare centers. Undergraduate education has been turned into much the same thing.

scroot on Sept 21, 2017 [–]

My understanding is we underwent a reorientation in the 1980s, because it was realized that there was too little attention paid to the benefits that a relatively autonomous economy can produce, killing off bad ideas, where what's being offered doesn't match with what people need or want, and allowing better ones to replace them.

There was a rightward swing in the late 1970s that took root in our political system, then commentariat, and then culture. It has never reverted. The term “neoliberalism” gets thrown around (usually by dweebs like me) but it's the precise term to use. Wendy Brown's recent book is probably the best overview of the topic in recent years. The cultural shift that was unleashed in that period is so insidious that you don't even notice it half the time. Think about dating apps/sites where users talk about their romantic lives using terms like “R.O.I.” Or people discussing ways to “optimize” their lives by making them more efficient. It's nuts. Steve Jobs' old “bicycle of the mind” chestnut is, in a way, emblematic of this way of thinking. He was talking about how the most “efficient” animal was a human with a bicycle. He wanted human thinking to be “more efficient.” If you listen to Kay, on the other hand, he's talking about something entirely different. The transcendent effect of literacy on mankind created the very possibility of civilization, for good or ill. Computing as an aid to thinking in the way the written word was previously could take this to the next, higher stage – one we cannot really describe or talk about because we don't even have the language to do so. But short term thinking, shareholder value, and the need for economic growth – these are and have been the pillars of our politics and culture for several decades now. No one says who that growth benefits, of course, which is why it's no coincidence that the maw of inequality has opened ever wider during the same period. If you're wondering where all the “good ideas” are, well, we don't have time for good ideas. We only have time for profitable ones, or at least ones that can be sold after a high valuation. The culture also trickled into the university, and then to funding (not just science funding, but funding for most fields. We need more than science to do new science). I have been on the bitch end of writing NSF grants for pretty ambitious projects, and the requirements are straight out of Kafka. They want you to demonstrate that you'll be able to do the things you're saying you'll hope to be able to do. That's not how it used to work. But the angle is always the same: they want something “innovative” that can be useful as immediately as possible. Useful for the economy, that is. They don't understand this undeniable fact: if you want amazing developments, you have to let passionate and smart people screw around and you have to pay them for it. The university used to be the place to screw around with ideas and methods. Now it's career prep.

I understand where the impulse to focus on that comes from, because globalization tends to produce a much more competitive economic landscape, where people feel much more uncertain about basic questions they have to answer

This kind of globalization is a choice, one made by powerful people with explicit interests. It was not inevitable. Right now I live in the wealthiest country that has ever existed on the planet. And right now many of its citizens are calling their elected leaders to beg them not to take away the sliver of health care that they have left. We serve the economy and not each other. When there's a big decision to make, our leaders wonder “how the market will react,” rather than how people will be affected. Last point: the idea of this thing called “the economy” as an object of policy is relatively new. Timothy Mitchell has an amazing chapter on it in his book Carbon Democracy. The 20th century was one where we allowed the field of economics to cannibalize all others. The 21st has not taken the chance to escape this.

alankay on Sept 21, 2017 [–]

We should get “Fast Company” to interview you – you'd do a better job! (Actually I think I did do a better job than their editing wound up with.) Your comments and criticism of the NSF are dead-on (and are the reason I gave up on NSF a few years ago – and I was on several of the Advisory Committees and could not convince the Foundation to be tougher about its funding autonomy – very tricky for them, admittedly, because of the way it is organized and threaded through both Congress and OMB). One way to look at it is that there is a sense of desperation that has grown larger and larger, and which manifests both in the powerless and the powered.

mmiller on Sept 22, 2017 [–]

What came to mind when I read your comments were some complaints I've had that relate to the “looking for the keys under the streetlight” fallacy. There are intuitions and anecdotes we can have about the unknowns, which is the best we can do about many things in the present, until they can be measured and tested. A problem I see often is there are people who believe that if it can't be measured, it's not part of reality. I find that the unknowns can be a very important part of working with reality successfully, and that what can be measured in the present can end up being not that significant. It depends on what you're looking for. As Kay and I have discussed here, efficiency is not irrelevant, but we agree it's not the only significant factor in a system that we all hope will produce the wealth needed for societal progress. What seems to be needed is some knowledge and ethics re. the wealth of society, ideally enacted voluntarily, as in the philanthropic efforts of Carnegie, and similar efforts. I happened to watch a bit of Ken Burns's doc. on the Vietnam War, and I was reminded that McNamara was a man of metrics. He wanted data on anything and everything that was happening to our forces, and that of the Viet Cong. He got reams of it, but there were people who asked, “Are we winning the hearts and minds?” There were no metrics on that. We didn't have a way to measure it, so the question was considered irrelevant. The best that could've been done was to get honest opinions from commanders in the field, who understood the war they were fighting, and were interacting with the civilian population, if people were willing to listen to that. In a guerrilla war, which is what that was, “hearts and minds” was one of the most important factors. Most of the rest could've been noise. I dovetail with your complaint about focus on the economy in policy, but for me, it's philosophical: It's not the government's job to be worrying about that so much. If you look at the Constitution, it doesn't say a thing about “shall maintain a prosperous economy,” or, “shall ensure an equitable economy,” or any of that. Sure, people want enough wealth to go around, but it's up to us to negotiate how that happens, not the government. I think unfortunately, politicians and voters, no matter their political stripe, have lost track of what the government's job is. I think, broadly, we treat it like an insurer, or banker of last resort. If things don't seem to work out the way we'd like, we appeal to government to magically make us whole (including economically). That's really missing the point of it. I could go into a whole thing about the medical system (I won't), but I'll say from the research I've done on it (which probably is not the best, but I made an effort of it), it is one of the most tragic things I've seen, because it is grossly distorted from what it could be, but this is because we're not respecting its function. As you've surmised with globalization, it's been set up this way by some interested people. It's a choice. I see a big knowledge problem with what's been done to it for decades. Doesn't it figure that people interested in healing people should be figuring out how to do it, to serve the most people who need their help, rather than people who have no idea how to do that thinking they should tell them how to do it? This relates back to your proposition about scientific research. Shouldn't research be left to people who know how to do that, rather than people who don't trying to micromanage how you do it? 
I think we'd be better off if people had a sense of understanding the limits of their own knowledge. I don't know what it is that has people thinking otherwise. The best term I can come up for it is “hubris.” Perhaps the more accurate diagnosis, as Kay was saying, is fear. It makes sense that that can cause people to put their nose in deeper than where it should be, but it's like a horde panicking around someone who's collapsed from cardiac arrest, which doesn't have the good sense to give someone who knows CPR some room, and then to allow medical personnel in, once they show up. It's looked to me like a feedback loop, and I shudder to think about where it will end up, but I feel pretty powerless to stop the process at the moment. I made some efforts in that direction, only to discover I have no idea what I'm really dealing with. So, with some regret, I've followed Sagan's advice (“Don't waste neurons on what doesn't work.”), left it alone, and directed my energy into areas I love, where I hope to make a meaningful contribution someday. The experience of the former has given me an interest in listening to scientists who have studied people, what they're really like. It seems like something I need to get past is what Jon Haidt has called the “rationalist delusion,” particularly the idea that rational thought alone can change minds. Not so.

alankay on Sept 20, 2017 [–]

Clear thoughts and summary!

mmiller on Sept 21, 2017 [–]

“We are 'story creatures' and it takes a lot of training and willpower to depart from 'fond stories and beliefs' to 'actually think things through'.” What your analysis did for me was help put two and two together, but yes, it “collided” with my notions of what an accomplishment it was, and what I had been led to believe that would lead to. What you exposed was that the reality of “what that would lead to” was quite different, and it explained the reality that was unfolding. I knew that Apollo was a big rocket (ironically, that was one thing that impressed me about it, but I thought how amazing it was that such a thing could be constructed in the first place, and work. Though, I thought many years later about just what you said, that the more fuel you add, the more the fuel is just expending energy moving itself!), and that there were only three people on it, though the “efficiency” perspective, relating that to how it did not contribute to further knowledge for space travel, didn't occur to me until you laid it out. I also knew from listening to Reagan's science advisor that NASA was heavily influenced by the goals of military contractors that had done R&D on various technologies in the '60s, and which exerted political pressure to put them to use, to get return on investment. He said something to the effect of, “People worry about the Military Industrial Complex. Well, NASA IS the Military Industrial Complex! People don't think of it that way, but it is.” Not too long after I heard you talk about this, I happened to hear about a simulator called the Kerbal Space Program (commonly referred to as KSP), and someone posted a video of a “ludicrous single-launch vehicle to Mars (and back)” in it. Even though I think I've heard that KSP does not completely use realistic physics, it drove your point home fairly well. Though, people would point out that none of the proposed missions to Mars have talked about a single-launch vehicle from surface to surface. All of the proposals I've heard about have talked about constructing the vehicle in orbit. KSP, though, assumes chemical propellant. https://youtu.be/mrjpELy1xzc “the real romance and its implications didn't happen in the general public and politicians.” In hindsight, I've been struck by that. When I took the time to learn about the history of the Apollo program, I learned that Apollo 11 made a big impression on people all over the world, but that was really it. I think as far as the U.S. was concerned, people were probably more impressed that it met a political goal, JFK's bold proclamation that we would get men to the Moon and back, and that it was a historic first, but there was no sense of, “Great! Now what?” It was just, “Yay, we did it! Now onto other things.” There's even been some speculation I've heard from politicos, who were in politics at the time, that we wouldn't have done any of the moon shots if Kennedy hadn't been assassinated, that it was sympathy for his legacy that drove the political will to follow through with it (if true, that's where the romance lay). Hardly anybody paid attention to it after 11, with the exception of Apollo 13, since there was the drama of a possible tragedy. Apollo 18 never got off the ground. The rocket was all set to go, but the program was scrubbed. People can look at the rocket, laid out in its segments sideways, at the Johnson Space Center in Houston.

alankay on Sept 21, 2017 [–]

James Fletcher – twice the head of NASA – gave a very good speech arguing that “the moon shot, etc.” were really about learning to coordinate 300,000 people and billions of dollars to accomplish something big in a relatively short time. (And that the US should use these kinds of experiences – wars included, because the moon shot was part of the cold war – to pick “goals for good” and do those.) Most of the old hands and historians of the moon shot point out that the public in the 70s was no longer afraid of the Russians in the way it had been in the 50s, and that the successful moon landings helped assuage those fears. The public in general was not interested in space travel, science, etc. and did not understand it or choose to understand it. I think this is still the case today.

mmiller on Sept 25, 2017 [–]

“Most of the old hands and historians of the moon shot point out that the public in the 70s was no longer afraid of the Russians in the way it had been in the 50s, and that the successful moon landings helped assuage those fears.”

That's what I realized about 10 years ago. The primary political motivation for the space missions was to establish higher ground in a military strategic sense, and once that was accomplished, most people didn't care about it anymore. There was also an element of prestige to it, at least from Americans' perspective: because we had reached a “higher” point in space than the Russians, it gave us a sense of dominance over their extension of power.

You know this already, but people should keep in mind that what got the ball rolling was the launch of Sputnik in 1957. The message that most people got from that was that the Russians controlled the higher ground, militarily, and that we needed to capture it pronto, or else we were going to be at a disadvantage in the nuclear arms race. It also created a major push, as I understand it, by the federal government to put more of an emphasis on math and science education, to seed the population of people who would be needed to pull that off. I've heard it said that this created a generation of scientists and engineers who eventually came into industry, which created the technological products we eventually came to use. There's been a positive sense of that legacy from people who have reviewed it, but I've since heard from people who went through the “new math” that was taught through that push. They hated it with a passion, and said it turned them off to math for many years to come.

The more positive aspect I like to reflect on is that Sputnik inspired young people to become interested in math, science, and engineering on their own, and they really experienced those disciplines. A nice portrayal of one such person is in the movie “October Sky,” based on Homer Hickam's autobiographical book, “Rocket Boys.”

pepijndevos on Sept 17, 2017 [–]

It seems to me the Raspberry Pi people have put out a lot of good work making hard things possible and transforming education. The video you posted reminded me about some of the work of Bret Victor, especially his interactive environments in his video on “inventing on principle”. Although what's missing there is the ability to connect and modify the environment itself. I still have to think a bit more about your link to Montessori, who has been a great inspiration for the school ( http://aventurijn.org ) that my parents started. Also in relation to what you said about teaching real math. Montessori has this system with beads and other countable cubes and pies to teach things like multiplying fractions, that is not used in most schools that call themselves “Montessori schools”.

alankay on Sept 24, 2017 [–]

Bret is a great thinker and designer

shalabhc on Sept 17, 2017 [–]

See what you think about what happens around 9:05

I saw two interesting things around 9:05. A 13 year old made an 'active essay' on the computer which contains not just text but also a dynamic interactive environment so the reader can follow along and even try out new ideas. This type of media is not prevalent today - essays written by 13 year olds today would be in Google/Word docs and contain only static text and static pictures (i.e. digitized paper), but no interactivity. There are ways to do interactivity today, of course, but they are not easy and not the default. Is this what you are getting at? The other interesting thing is how two tools - the drawing tool and animation tool - are made to work together, even when they were not created with each other in mind. IIUC the image is not a file format here but an object, but don't both tools then need to work with the same image protocol? I suppose you can always have adapters to connect different image protocols, but it doesn't seem like the best option. Still thinking about how much (or how little) shared knowledge is needed to make this scale to all types of objects.

alankay on Sept 18, 2017 [–]

My reason for drawing your attention to this section of the talk is to show that some of the ideas (now 40-50 years old) were about “dynamic media” – of course live computing should be part of the combined media experience on a personal computer – and of course you should be able to do what are now called “mash ups”, that is, to combine useful things easily and at will (it's crazy that this isn't even provided e.g. for maps in a general way on smartphones, tablets and PCs). But the larger point here is that if one is dealing with dynamic objects as originally intended, the objects can help greatly and safely in coordinating them. This shouldn't be more difficult than what we do in combining ordinary materials in our physical world (it should be even simpler!).

In the system used for the demo – Smalltalk-78 – every thing in the system is a dynamic object – there are no “data structures”. This means in part that each object, besides doing its main purpose, can also provide useful help in using it, can include general protocols for “mashing up”, etc.

We can do better today, but my whole point in the interview and in these comments is that once e.g. Engelbart showed us great ideas for personal computing, we should not adopt worse ideas (why would any reasonable people do this?), and once dynamic media has been demonstrated in a comprehensive way, we should not go back to imitating static media in ways that preclude dynamic media (if you have dynamic media you can do static media, but not vice versa!). Going back and doing Engelbart or Parc also makes no sense, because we have vastly more computing resources today than 50 years ago. We need to go forward – and -to think things through- ! – about what computers are, what we are, and how to use the best of both in powerful combinations. This was Licklider's dream from 1960, and some of it was built. The dream is still central to our thinking today because it was so large and good as to be always beckoning us ahead.
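As a toy illustration of the “general protocols for mashing up” idea (a Python sketch, purely hypothetical and not the Smalltalk-78 design): two tools built separately can be combined by a third party as long as each object answers a small shared protocol, here an invented as_image() message:

    class Drawing:
        # A "drawing tool" object: its main job is holding shapes.
        def __init__(self, shapes):
            self.shapes = shapes

        def as_image(self):
            # Shared protocol: describe myself as lines of an "image".
            return ["drawing:"] + [f"  shape: {s}" for s in self.shapes]

    class Animation:
        # An "animation tool" object: its main job is producing frames.
        def __init__(self, frames):
            self.frames = frames

        def as_image(self):
            # Same shared protocol, completely different internals.
            return [f"frame {i}: {f}" for i, f in enumerate(self.frames)]

    def mash_up(*objects):
        # A compositor that knows nothing about drawings or animations,
        # only that every object answers as_image().
        lines = []
        for obj in objects:
            lines.extend(obj.as_image())
        return "\n".join(lines)

    print(mash_up(Drawing(["circle", "square"]), Animation(["walk", "run"])))

The coordination knowledge lives in the objects themselves rather than in file formats, which is the sense in which dynamic objects can help greatly and safely in coordinating a combination.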

shalabhc on Sept 18, 2017 [–]

Thank you for taking the time to respond, and I'd really appreciate it if you could clarify my follow-up below too!

every thing in the system is a dynamic object – there are no “data structures”

I'm still programming in data structures :/. I've seen many of your talks over the years and it took me quite a while to realize what you mean by objects (I think) is not just the textual specification (i.e. 'source code' in today's world), but rather a live, run-able thing that can be probed, inspected and made to do its thing, all by sending messages to it. In the Unix world this would be more akin to a long running server process, but with a much better unified, discover-able IPC mechanism (i.e. 'messaging'). The only thing that needs to be standardized here is the messaging mechanism itself. Larger processes would be constructed by just hooking up existing objects. Automatic persistence would mean these objects don't need to extract and store 'just data' outside themselves, etc.

This model blurs the distinction between what today we call 'programming' (writing large gobs of text), what we call 'operations' (configuring and deploying programs), and what we call 'using' (e.g. reviewing, organizing my photos). Instead, for every case, I would be doing the same kind of operation - i.e. inspecting and hooking up objects - but the objects I'm working with would be different, and the UI could be different. This makes programming more interactive ('let me see if this object can talk to that object by actually connecting them' vs. 'let me see if I can write a large blob of text that satisfies the compiler, by simulating the computer in my head').

The other thing I notice is you don't slice the computation the same way that is so common today. E.g. today I write source code (form #1 of computation), which runs through a compiler to produce an executable file (form #2 of the same computation), which is then executed and loaded in memory (form #3, because now it merges with the data from outside itself). Form #1 is checked in to source control, form #2 is bundled for distribution and form #3 is rather transient. Instead, you're slicing computation on a different axis and all forms of the same computation are kept together - i.e. the specification, executable and runtime forms are one and the same 'object'. The decomposition happens by breaking down along functional boundaries. This means modification of the specification can happen anywhere I encounter one of these live objects, right then and there. I don't have to trace the computation to its 'source'.

So my main question is - am I on the right track here?
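A minimal sketch of that 'inspect and hook up live objects' workflow, in Python for concreteness. The names send() and messages() are invented for illustration; this is not any particular system's API, just the shape of the idea:

    class LiveObject:
        # An object you interact with only by sending messages to it.
        def send(self, message, *args):
            handler = getattr(self, "msg_" + message, None)
            if handler is None:
                return f"<{type(self).__name__} does not understand '{message}'>"
            return handler(*args)

        def messages(self):
            # The object describes its own protocol, so it can be probed
            # live instead of by reading its source code first.
            return [name[4:] for name in dir(self) if name.startswith("msg_")]

    class PhotoAlbum(LiveObject):
        def __init__(self):
            self.photos = []

        def msg_add(self, photo):
            self.photos.append(photo)
            return self

        def msg_count(self):
            return len(self.photos)

    album = PhotoAlbum()
    print(album.messages())        # ['add', 'count'] -- discovered at runtime
    album.send("add", "beach.jpg")
    print(album.send("count"))     # 1
    print(album.send("rotate"))    # a graceful "does not understand"

Hooking two such objects together is then just another message send, which is what makes 'programming', 'operations', and 'using' start to look like the same activity.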

Going back and doing Engelbart or Parc also makes no sense

I agree, but given the sad state of composition, even if we had some of those ideas today, it would seem like a step forward :) IMO, today we want to think of farms of computers as one large computer, and instead of programming in the small, we want to program all of them together.

alankay on Sept 19, 2017 [–]

Yes, you have the gist of our approach in the 60s and especially at Parc in the 70s. And the Doug McIlroy parts of Unix also got this (the “pipes” programming and other ideas). What I called “objects” ca 1966 was a takeoff from Simula and Sketchpad that was highly influenced by biology, by the “processes” (a kind of virtual computer) that were starting to be manifested by time-sharing systems, and by my research community's discussions and goals for doing an “ARPAnet” of distributed computers. If you took the basic elements to be “computers in communication” you could easily get the semantics of everything else (even to simulate data structures if you still thought you needed them). So, yes, everything could be thought of as “servers”. Smalltalk at Parc was entirely structured this way (and the demo I made from one of the Parc Smalltalks for the Ted Nelson tribute shows examples of this). It's worth noting that you then have made a “software Internet”, and this can be mapped in many ways over a physical Internet. And so forth.

This got quite missed. In a later interview Steve said that he missed a bunch of things from his visit to Parc in 1979. What was ironic was that the context of the interview was partly that the SW of the NeXT computer now did have these (in fact, not really). To be a bit more fair, big culprits in the miss in the 80s were Motorola and Intel for not making IC CPUs with Chuck Thacker's emulation architectures that we used at Parc to be able to run ultra high level languages efficiently enough. The other big culprit was that you could do -something- and sell it for a few thousand dollars, whereas what was needed was something whose price tag in the early 80s would have been more like $10K.

Note that a final culprit here is that the personal computer could not be seen for all it really was, and especially in the upcoming lives of most people. The average price of a car when the Mac came out was about $9K (that according to the web is about $20K today – the average price of a car today is about $28K). To me a really good personal computer is worth every penny of $28K – I'd love to be able to buy $28K of personal computer! One way to evaluate “the computer revolution” is to note not just what most people do with their computers in all forms, but what they are willing to pay. I think it will be a while before most people can see enough to put at least the value of a car on their “information and thinking amplifier vehicle”.
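A tiny sketch of the “computers in communication” framing (Python, purely illustrative and not Smalltalk semantics): if the only primitive is objects exchanging messages, then something like a mutable “cell” data structure is just another little server you talk to:

    class Cell:
        # A "little computer" whose entire behavior is the set of messages
        # it answers; its stored value is never touched directly.
        def __init__(self, value=None):
            self._value = value

        def receive(self, message, *args):
            if message == "read":
                return self._value
            if message == "write":
                (self._value,) = args
                return self._value
            raise ValueError(f"Cell does not understand {message!r}")

    # Used only through messages, as if it were a remote server:
    counter = Cell(0)
    counter.receive("write", counter.receive("read") + 1)
    print(counter.receive("read"))  # 1

Because nothing outside the object depends on how the value is stored, the same message protocol could just as well be answered by an object on another machine, which is one sense in which such a "software Internet" can be mapped over a physical one.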

shalabhc on Sept 19, 2017 [–]

It's worth noting that you then have made a “software Internet”, and this can be mapped in many ways over a physical Internet.

The more interesting/optimized ways to map this would be where a single object in the software internet somehow maps to multiple computers, either doing parallel computation or partitioned computation on each. I feel the semantics of mapping the object onto a physical computer would have to be encoded in the object itself. Perhaps some other kinds of higher level semantic model (i.e. not a 'software internet') might also be easy to map onto a physical internet. This is something I am interested in actively exploring. That is, how to build semantic models that are optimized for human comprehension of a problem, but can be directly run on farms of physical computers? Today a lot of the translation is done in our heads - all the way down to a fairly low level.

big culprits in the miss in the 80s were Motorola and Intel for not making IC CPUs with Chuck Thacker's emulation architectures

Maybe there is a feedback loop where the growth of Unix leads to hardware vendors thinking 'let's optimize for C', which then feeds the growth further? OTOH, even emulated machines are faster than hardware machines used to be.

I'd love to be able to buy $28K of personal computer!

Well, you can already buy $28K or more of computing resources and connect it to your personal device. It's not easy to get much value from this today, though.

mmiller on Sept 19, 2017 [–]

“The more interesting/optimized ways to map this would be where a single object in the software internet somehow maps to multiple computers, either doing parallel computation or partitioned computation on each. I feel the semantics of mapping the object onto a physical computer would have to be encoded in the object itself.” You might be interested in Alan Kay's '97 OOPSLA presentation. He talked in a similar vein to what you're talking about: https://youtu.be/oKg1hTOQXoY?t=26m45s Inspired by what he said there, I tried a little experiment in Squeak, which worked, as far as it went (scroll down the answer a bit, to see what I'm talking about, here): https://www.quora.com/What-features-of-Smalltalk-are-importa… I only got that far with it, because I realized once I did it that I had more work to do in understanding what to do with what I got back (mainly translating it into something that would keep with the beauty of what I had started)… “Maybe there is a feedback loop where the growth of Unix leads to hardware vendors thinking 'lets optimize for C', which then feeds the growth further? OTOH, even emulated machines are faster than hardware machines used to be.” There is a feedback loop to it, though as development platforms change, that feedback gets somewhat attenuated. As I recall, what you describe with C happened, but it began in the late '90s, and into the 2000s. I started hearing about CPUs being optimized to run C faster at that point. I once got into an argument with someone on Quora about this re. “If Lisp is so great, why aren't more people using it?” I used Kay's point about how bad processor designs were partly to blame for that, because a large part of why programmers make their choices has to do with tradition (this gets translated to “familiarity”). Lisp and Smalltalk did not run well on the early microprocessors produced by these companies in the 1970s. As a consequence, programmers did not see them as viable for anything other than CS research, and higher-end computing (minicomputers). A counter to this was the invention of Lisp machines, with processors designed to run Lisp more optimally. A couple companies got started in the '70s to produce them, and they lasted into the early '90s. One of these companies, Symbolics, found a niche in producing high-end computer graphics systems. The catch, as far as developer adoption went, was these systems were more expensive than your typical microcomputer, and their system stuff (the design of their processors, and system software) was not “free as in beer.” Unix was distributed for free by AT&T for about 12 years. Once AT&T's long-distance monopoly was broken up, they started charging a licensing fee for it. Unix eventually ran reasonably well on the more popular microprocessors, but I think it's safe to say this was because the processors got faster at what they did, not that they were optimized for C. This effect eventually occurred for Lisp as well by the early '90s, which is one reason the Lisp machines died out. A second cause for their demise was the “AI winter” that started in the late '80s. However, by then, the “tradition” of using C, and later C++ for programming most commercial systems had been set in the marketplace. The pattern that seems to repeat is that languages become popular because of the platforms they “rode in on,” or at least that's the perception. C came on the coattails of Unix. C++ seems to have done this as well. This is the reason Java looks the way it does. It came out of this mindset. 
It was marketed as “the language for the internet,” and it's piggybacked on C++ for its syntax and language features. At the time the internet started becoming popular, Unix was seen as the OS platform on which it ran (which had a lot of truth to it). However, a factor that had to be considered when running software for Unix was portability, since even though there were Unix standards, every Unix system had some differences. C was reasonably portable between them, if you were careful in your implementation, basically sticking to POSIX-compliant libraries. C++ was not so much, because different systems had C++ compilers that only implemented different subsets of the language specification well, and didn't implement some features at all. C++ was used for a time in building early internet services (combined with Perl, which also “rode in” on Unix). Java was seen as a pragmatic improvement on C++ among software engineers, because, “It has one implementation, but it runs on multiple OSes. It has all of the familiarity, better portability, better security features, with none of the hassles.” However, it completely gave up on the purpose of C++ (at the time), which was to be a macro language on top of C in a similar way to how Simula was a macro language on top of Algol. Despite this, it kept C++'s overall architectural scheme, because that's what programmers thought was what you used for “serious work.” From a “power” perspective, one has to wonder why programmers, when looking at the prospect of putting services online, didn't look at the programming architecture, since they could see some problems with it pretty early, and say to themselves, “We need something a lot better”? Well, this is because most programmers don't think about what they're really dealing with, and modeling it in the most comprehensive way they can, because that's not a concept in their heads. Going back to my first point about hardware, for many years, the hardware they chose didn't give them the power so they could have the possibility to think about that. As a result, programmers mostly think about traits, and the community that binds them together. That gives them a sense of feeling supported in their endeavors, scaling out the pragmatic implementation details, because they at least know they can't deal with that on their own. Most didn't think to ask (including myself at the time), “Well, gee. We have these systems on the internet. They all have different implementation details, yet it all works the same between systems, even as the systems change… Why don't we model that, if for no other reason than we're targeting the internet, anyway? Why not try to make our software work like that?” On one level, the way developers behave is tribal. Looked at another way, it's mercantilistic. If there's a feedback loop, that's it. “OTOH, even emulated machines are faster than hardware machines used to be.” What Kay is talking about is that the Alto didn't implement a hard-coded processor. It was soft-microcoded. You could load instructions for the processor itself to run on, and then load your system software on top of that. This enabled them to make decisions like, “My process runs less efficiently when the processor runs my code this way. I can change it to this, and make it run faster.” This will explain Kay's use of the term “emulated.” I didn't know this until a couple years ago, but at first, they programmed Smalltalk on a Data General Nova minicomputer. 
When they brought Smalltalk to the Alto, they microcoded the Alto so that it could run Nova machine code. So, it sounds like they could just transfer the Smalltalk VM binary to the Alto and run it. Presumably, they could even transfer the BCPL compiler they were using to the Alto, and compile versions of Smalltalk with that. The point, though, is that they could optimize the performance of their software by tuning the Alto's processor to what they needed. That's what he said was missing from the early microprocessors: you couldn't add or change operations, and you couldn't change how they were implemented.

alankay on Sept 20, 2017 [–]

Actually … only the first version of Smalltalk was done in terms of the NOVA (and not using BCPL). The subsequent versions (Smalltalk-76 on) were done by making a custom virtual machine in the Alto's microcode that could run Smalltalk's byte codes efficiently. The basic idea is that you can win if the microcode cycles are enough faster than the main memory cycles so that the emulations are always waiting on main memory. This was generally the case on the Alto and Dorado. Intel could have made the “Harvard” 1st level caches large enough to accommodate an emulator – that would have made a big difference. (This was a moot point in the 80s)
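
To put a rough number on that trade-off (these figures are purely illustrative, not actual Alto or Dorado timings):

  # If a main-memory cycle costs 850 ns and a microinstruction costs 170 ns
  # (hypothetical figures), about 5 microinstructions fit inside each memory
  # cycle.  An emulator that needs roughly that many microinstructions per
  # emulated operation adds no wall-clock time, because the emulated machine
  # would be waiting on main memory anyway.
  memory_cycle_ns = 850        # hypothetical
  microinstruction_ns = 170    # hypothetical
  print(memory_cycle_ns / microinstruction_ns)   # 5.0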

mmiller on Sept 20, 2017 [–]

I know this is getting nit-picky, but I think people might be interested in getting some of the details in the history of how Smalltalk developed. Dan Ingalls said in “Smalltalk-80: Bits of History”: “The very first Smalltalk evaluator was a thousand-line BASIC program which first evaluated 3 + 4 in October 1972. It was followed in two months by a Nova assembly code implementation which became known as the Smalltalk-72 system.” The first Altos were produced, if I have this right, in 1973. I was surprised when I first encountered Ingalls's implementation of an Alto on the web, running Smalltalk-72, because the first thing I was presented with was, “Lively Web Nova emulator”, and I had to hit a button labeled “Show Smalltalk” to see the environment. He said what I saw was Nova machine code from a genuine ST-72 image, from an original disk platter. I take it from your comment that you're saying by the time ST-76 was developed, the Alto hardware had become fast enough that you were able to significantly reduce your use of machine code, and run bytecode directly at the hardware level. I could've sworn Ingalls said something about using BCPL for earlier versions of Smalltalk, but quoting out of “Bits of History” again, Ingalls, when writing about the Dorado and Smalltalk-80, said of BCPL that the compiler you were using compiled to Alto code, but … “As it turned out, we only used Bcpl for initialization, since it could not generate our extended Alto instructions and since its subroutine calling sequence is less efficient than a hand-coded one by a factor of about 3.”

alankay on Sept 21, 2017 [–]

The Alto didn't get any faster, and there was not a lot of superfast microcode RAM (if we'd had more it would have made a huge difference). In the beginning we just got Smalltalk-72 going in the NOVA emulator. Then we used the microcode for a variety of real-time graphics and music (2.5 D halftone animation, and several kinds of polytimbral timbre synthesis including 2 keyboards and pedal organ). These were separate demos because both wouldn't fit in the microcode. Then Dan did “Bitblt” in microcode which was a universal screen painting primitive (the ancestor of all others). Then we finally did the byte-code emulator for Smalltalk-76. The last two fit in microcode, but the music and the 2.5 D real-time graphics didn't. The Notetaker Smalltalk (-78) was a kind of sweet spot in that it was completely in Smalltalk except for 6K bytes of machine code. This was the one we brought to life for the Ted Nelson tribute.

shalabhc on Sept 20, 2017 [–]

Thanks for the long write up. I found it very interesting.

You might be interested in Alan Kay's '97 OOPSLA presentation

Oh yeah I have actually seen that - probably time to watch it again.

Well, this is because most programmers don't think about what they're really dealing with

Agree with that. Most people are working on the 'problem at hand' using the current frame of context and ideas, and focus on cleverness, optimization or throughput within this framework, when changing the frame of context may in fact be much better.

What Kay is talking about is that the Alto didn't implement a hard-coded processor. It was soft-microcoded.

Interesting. I wonder if FPGAs could be used for something similar - i.e. program the FPGAs to run your bytecode directly. But I'm speculating because I don't know too much about FPGAs.

alankay on Sept 20, 2017 [–]

Yes re: FPGAs – they are definitely the modern placeholder of microcode (and better because you can organize how the computation and state are hooked together). The old culprit – Intel – is now offering hybrid chips with both an ARM and a good size patch of FPGA – combine this with a decent memory architecture (in many ways the hidden barrier these days) and this is a pretty good basis for comprehensive new designs.

alankay on Sept 20, 2017 [–]

The more interesting/optimized ways to map this would be where a single object in the software internet somehow maps to multiple computers …

Yes, this is the essence of Dave Reed's 1978 MIT thesis on the design of a distributed OS for the Internet of “consistent” objects mapped to multiple computers. In the early 2000s we had the opportunity to test this design by implementing it. This led to a series of systems called “Croquet” and an open source system and foundation called “Open Cobalt”.

how to build semantic models that are optimized for human comprehension of a problem, but can be directly run on farms of physical computers?

Keep on with this …

shalabhc on Sept 21, 2017 [–]

Keep on with this …

This is still kind of a mishmash of early thoughts and I have a couple of different lines of thought, which I hope will come together. I'll start with a couple of observations: 1. Most programming languages and DSLs are uni-directional - the computer doesn't talk back to the human in the same language. 2. The mental models (not the language) humans use to communicate to each other, even when using a lot of rigor and few ambiguities, are often different from the languages and models used for computation. The first idea is: there are some repeating structures in mental models. We think new concepts in terms of old by first thinking the structures (which are few and axiomatic, like the structure/function words in English) and then materializing the content, as well as refining the structures. E.g. I can say to a non-programmer that 'classes contain methods' and they kind of get the structure without knowing the content. In my mind this is represented as a graph, where the 'contains' relationship is an edge that connects two 'content nodes'.

 [something called class] --(contains)-> [something called method]

If I follow up with 'methods contain code', they can reason that classes indirectly contain code, without even knowing what these things actually mean! So 'contains' is kind of a universal concept - it applies to abstract content and physical content in a similar way. Another universal connection is 'abstraction of'; this implies one node (the abstract thing) is related to other nodes (the concrete things) in a specific way. Maybe structures can be made composable, and we can operate on graphs structurally, without knowing what the content means, while another operation might eventually figure out what the content means? The main assumption here is my thoughts are organized as graphs, where connections are both universal and domain specific, but of few kinds. Can I talk to the computer in terms of such graphs? The second idea is: I want to combine high level concepts and strategies from somewhat different domains. E.g. if I know different strategies for 'distributing things into bins' (consistent hashing, sharding, etc.), I invoke this 'idea' manually whenever I see a situation which looks like 'distribute things into bins' and make a choice - irrespective of scale. Can I get the computer to do this for me instead? So the final thing here is to get to something like this: I take an idea (i.e. a node in a graph) from the distributed computing domain, merge it with a definition (another node) of a computation I created (e.g. persistence strategies), and have the computer offer options on how to distribute that computation (i.e. 'distributed persistence strategies'). Then I can make choices and combine it with a 'convert idea to machine code' strategy and generate a program. This is all a bit abstract at this point, but I'm also trying to find where this overlaps with prior art.
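
A minimal sketch of that kind of structure-only reasoning, in Python (the node labels and the ConceptGraph class are made up just to illustrate the point; nothing here knows what 'class' or 'method' actually mean):

  # Concept graph: nodes are opaque labels, edges carry a small set of
  # universal relationship kinds such as 'contains' or 'abstraction of'.
  from collections import defaultdict

  class ConceptGraph:
      def __init__(self):
          self.edges = defaultdict(set)   # (source, kind) -> {targets}

      def relate(self, source, kind, target):
          self.edges[(source, kind)].add(target)

      def reachable(self, source, kind):
          # Follow one relationship kind transitively, purely structurally.
          seen, frontier = set(), [source]
          while frontier:
              node = frontier.pop()
              for target in self.edges[(node, kind)]:
                  if target not in seen:
                      seen.add(target)
                      frontier.append(target)
          return seen

  g = ConceptGraph()
  g.relate('class', 'contains', 'method')
  g.relate('method', 'contains', 'code')
  print(g.reachable('class', 'contains'))   # {'method', 'code'} (order may vary)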

alankay on Sept 21, 2017 [–]

A clarifying comment here. When one thinks in terms of what I called objects ca. 1966, one is talking about entities that from the outside are identical to what we think of as computers (and this means not just sending messages and getting outputs, but that we don't get to look inside with our messages, and our messages don't get to command, unless whatever is going on in the interior of the computer has decided to allow it). So from the outside, there are no imperatives, only requests and questions. Another way to look at this is that an object/computer is a kind of “server” (I worry about using this term because it also has “pop” meanings, but it's a good term.) This is sometimes called “total encapsulation”. From this standpoint, we don't know what's inside. Could be just hardware. Could be variables and methods. Could be some form of ontology. Or mix and match. This is the meaning of computers on a network, especially large worldwide ones. The basic idea of “objects” is that what is absolutely needed for doing things at large scale can be supplied in non-complex terms for also doing the small scale things that used to be done with data structures and procedures. Secondarily, some of the problems of data structures and procedures at any scale can be done away with by going to the “universal servers in systems” ideas. Similarly, the things we have to do for critical “data structures” – such as large scalings, “atomic transactions”, versions, redundancy, distribution, backup, and “procedural fields” (such as the attribute “age”) – are all more easily and cleanly dealt with using the idea of “objects”. One of the ways of looking at what happened in programming is that many if not most of the naive ways to deal with things when computers were really small did not scale up, but most programmers wanted to stay with the original methods, and they taught next gen programmers the original methods, and created large fragile bodies of legacy code that require experts in the old methods to maintain, fix, extend …
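
A tiny sketch of what “requests, not commands” can look like in code (Python; the Account example, its message names, and its refusal policy are invented purely for illustration and are not anything from Smalltalk itself):

  # An 'object as computer': the only way in is a request, and the
  # receiver decides for itself whether and how to honor it.
  class Account:
      def __init__(self, balance):
          self.__balance = balance          # internal; not part of the outside interface

      def receive(self, request, **args):
          if request == 'balance?':         # a question
              return self.__balance
          if request == 'withdraw':         # a request, not a command
              amount = args.get('amount', 0)
              if 0 < amount <= self.__balance:
                  self.__balance -= amount
                  return True
              return False                  # the object may refuse
          return 'message not understood'

  a = Account(100)
  print(a.receive('withdraw', amount=250))  # False: refused; there is no setter to force it
  print(a.receive('balance?'))              # 100

The point of the sketch is just that there is nothing to “set” from the outside: all the outside can do is ask, and the object decides.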

shalabhc on Sept 21, 2017 [–]

So from the outside, there are no imperatives, only requests and questions.

This threw me off a bit as Smalltalk collections have imperative style messages for instance.

Could be some form of ontology.

This remark helped me find some clarity. I want the computer to help me do cross-ontological reasoning and mapping. For instance, if I want to compute geometry, how do I map the ontology of 'geometry' onto the ontology of 'smalltalk'? I 'think up' the mapping, but it would be great if the computer helps me here too. Mapping 'smalltalk' onto 'physical machines' is another ontological mapping. The 'mapping of ontologies' is itself an ontology. In large systems there are a lot of ontological 'views' and 'mappings' at play. I want to inspect and tweak each independently using the language of the ontology, and have the computers automatically map my requests to the physical layer in an efficient way. This is not possible in systems today because there is an incredible amount of pre-translation that happens, so a high-level question cannot be directly answered by the system - I have to track it down manually to a different level. Maybe the answer is to define the ontologies as object collections and have them talk to each other and figure it out. I want to tweak things after the system is up, of course, so I could send an appropriate message (e.g. 'change the bit representation of integers' or 'change the strategy used in mapping virtual objects to physical') and everything affected would be updated automatically (is this 'extreme late binding'?).

alankay on Sept 22, 2017 [–]

Yes, “collections” and other such things in Smalltalk are “the Christian Scientists with appendicitis”. Our implementations were definitely compromises between seeing how to be non-imperative vs already having the “devil's knowledge” of imperative programming. One of the notions we had about objects is that if we had to do something ugly because we didn't have a better idea, then we could at least hide it behind the encapsulation and the fact that message sending in the Smalltalks really is a request. Another way of looking at this is that if an “object” has a “setter” that directly affects a variable inside, then you don't have a real object! You've got a data structure, however much in disguise. Another place where the “sweet theory” was not carried into reality was in dependencies of various kinds. Only some important dependencies were mitigated by the actual Smalltalks. Two things that helped us were that we made many on-the-fly changes to the system over 8-10 years – about 80 system releases – and that we did a new language every two years. This allowed us to avoid getting completely gobbed up. The best and largest practical attempt at an ontology is in Doug Lenat's CYC. The history of this is interesting and required a number of quite different designs and implementations to gather understanding.

shalabhc on Sept 25, 2017 [–]

Yes, “collections” and other such things in Smalltalk are “the Christian Scientists with appendicitis”.

Interesting to hear this perspective - drives home the point that we shouldn't just stop at generic late bound data structures.

Only some important dependencies were mitigated by the actual Smalltalks.

Dependency management in today's systems is just mind numbing. If only we had a better way to name and locate these.

mmiller on Sept 25, 2017 [–]

One of the things I've realized is that using names for locating what's needed (I assume we're talking about the same idea) is part of the problem. At small scales it's fine. As systems get bigger, it becomes a problem. The internet went through this. When it started as the Arpanet, there was (if I remember correctly) one guy who kept the directory of names for each system on the network. The network started small, so this could work. As it grew into the thousands of nodes, this became less manageable, partly because there started to be duplicate requests for the same name for different nodes – naming conflicts – which is why DNS was created, and why ICANN was ultimately created, to settle who got to use which names. I doubt something like that would scale properly for code, though many organizations have tried it, by having software architects in charge of assigning names to entities within programs. The problem then comes when companies/organizations try to link their systems together to work more or less cooperatively. I heard despondent software engineers talk about this 15 years ago, saying, “This is our generation's Vietnam.” (They didn't lack for the ability to exaggerate, but the point was they could not “win” with this strategy.) They were hoping to build this idea of the semantic web, but different orgs couldn't agree on what terms meant. They'd use the same terms, but they would mean different things, and they couldn't make naming things work across domains (“domains” in more than one dimension). So, we need something different for locating things. Names are fine for humans. We could even have names in code, but they wouldn't be used for computers to find things, just for us. If we need to disambiguate, we can find other features to help, but computing needs something, I think, that identifies things by semantic signifiers, so that even though we use the same names to talk about them, computers can disambiguate them by what they actually do, by function. It wouldn't get rid of all redundancy, because humans being humans, economics and competition are going to promote some of that, but it would help create a lot more cooperation between systems.

scroot on Sept 28, 2017 [–]

The thing about DNS and naming is that there were a lot of ideas flying around, some of them in the big standards committees. X.400 and X.500 were the OSI standards for messaging and directory services that were going to handle finding entities using specific attributes rather than with direct names or even straight hierarchical naming (like DNS finally used). It's interesting to read all the old stuff – I had to sift through much of it a few years back when I wrote my dissertation on the early history of DNS (a cure for insomnia). I wonder with the Internet now if anything effectively different is even possible, considering that it's no longer a small network but everywhere like the air we breathe.

shalabhc on Sept 29, 2017 [–]

I wonder with the Internet now if anything effectively different is even possible, considering that it's no longer a small network but everywhere like the air we breathe.

You could slowly bootstrap a new system on the existing one, but you'd need a fleshed out design first :) Everything is replaceable, IMO, even well established conventions and standards, if something compelling comes along. The CCN ideas are related to naming as well. Maybe the ideas could be extended to handle 'objects' rather than just 'content'.

scroot on Sept 29, 2017 [–]

Hardware architecture is the horizon of my knowledge. But one thing I've always wondered is this: why not just have memory addresses inside a computer map to local IPv6 addresses, then have some other “chip” that can distinguish between non local IP addresses that would, in a perfect object world, point to places in memory on another remote machine? Obviously there would need to be some kind of virtualization of the memory but hopefully you get the idea. Not exactly related to naming but whatever.

shalabhc on Sept 29, 2017 [–]

Interesting - is the broader idea here that there is a virtual machine that spans multiple physical machines? Instead of virtual 'memory access', why not model this as a virtual 'software internet'?

scroot on Sept 29, 2017 [–]

I don't even think you need a VM, really. Just have this particular computer equipped with some soft-core that handles IP from the outside. The memory mapping, since it's just IPv6, can determine whether you are dealing with information from the outside world (non-localhost ips) or your own system (local ips). Because logically they are already different blocks in memory, they're already isolated. With something like that you might be able to have “pure objects” floating around the internet. Of course your computer's interpretation of a network object is something it has to realize inside of itself (kind of like the way you interpret the words coming from someone else's mouth in your own head, realizing them internally), but you will always be able to tell that “this object inside my system came from elsewhere” for its whole lifecycle. Maybe you could even have another soft-core (FPGA like) that deals with brokering these remote objects, so you can communicate changes to an incoming object that you want to send a message to. This is much more like communication between people, I think.

shalabhc on Sept 29, 2017 [–]

I don't even think you need a VM, really.

I mean a VM as in the idea that you are programming an abstract thing, not a physical thing. Not a VM as in a running program. You could emulate the memory mapper in software first - hardware would be an optimization. The important point is that 'memory mapper' sounds like the semantics would be `write(object_ip, at_this_offset, these_bytes)`, but what you really want IMO is `send(object_ip, this_message)`. That is, the memory is private and the pure message is constructed outside the object. You still need the mapping system to map the object's unique virtual id to a physical machine and a physical object. So having one IP for each of these objects could be one way. Alan Kay mentioned David Reed's 1978 thesis (http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-20…) which develops these ideas (still reading). In fact, a lot of 'recently' popular ideas seem to be related to the stuff in that thesis (e.g. 'pseudo-time').
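
A minimal sketch of that send-based semantics (Python; the ObjectMap class, its registry, and the JSON-over-socket transport are all invented for illustration, local objects are assumed to expose a hypothetical `receive` method, and the remote end is assumed to speak the same made-up protocol):

  # Sketch: a mapper that routes message sends by object id rather than
  # exposing anyone's memory for writing.
  import json, socket

  class ObjectMap:
      def __init__(self):
          self.local = {}        # object_id -> local object
          self.remote = {}       # object_id -> (host, port)

      def send(self, object_id, message):
          if object_id in self.local:
              # Local case: deliver the message directly.
              return self.local[object_id].receive(message)
          # Remote case: only the message crosses the wire; the receiver's
          # memory is never written to from here.
          host, port = self.remote[object_id]
          with socket.create_connection((host, port)) as s:
              s.sendall(json.dumps({'to': object_id, 'msg': message}).encode())
              return json.loads(s.recv(65536).decode())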

shalabhc on Sept 26, 2017 [–]

One of the things I've realized is that using names for locating what's needed (I assume we're talking about the same idea) is part of the problem.

I don't think naming itself is a problem if you have a fully decentralized system. E.g., each agent (org or person) can manage their namespace any way they choose in a single global virtual namespace. I'm imagining something like ipfs/keybasefs/upspin, but for objects, not files, and with some immutability and availability guarantees. But yes, there should probably be other ways to find these things, using some kind of semantic lookup/negotiation.

alankay on Sept 28, 2017 [–]

What he means is that names are a local convention, and scaling soon obliterates the conventions. Then you need to go to descriptions that use a much smaller set of agreed on things (and you can use the “ambassador” idea from the 70s as well).

scroot on Sept 28, 2017 [–]

A few years ago, when I was doing some historical research about DNS, I came across quite a few interesting papers that all discussed “agents” in a way that seemed based on some shared knowledge/assumptions people had at the time. In particular, these would be agents for locating things in the “future internetwork”. There's a paper by Postel and Mockapetris that comes to mind. Is this an example of “ambassadors”?

shalabhc on Sept 29, 2017 [–]

Ah, I see, we're talking about interoperability, not just naming. This is related to my original interest in language structure words and ontologies. The idea there is that the set of 'relationships' between things is small and universal (X 'contains' Y, A 'is an abstraction of' B) and perhaps can be used to discover and 'hook up' two object worlds that are from different domains.

alankay on Sept 28, 2017 [–]

Yes, this got very clear in a hurry even in the ARPAnet days, and later at Parc. (This is part of the Licklider “communicating with aliens” problem.) Note that you could do a little of this in Linda, and quite a bit more in a “2nd order Linda”. I've also explained the idea of “processes as 'ambassadors'” in various talks (including a recent one to the “Starship Congress”).

shalabhc on Sept 22, 2017 [–]

Could be just hardware. Could be variables and methods. Could be some form of ontology. …
more easily and cleanly dealt with using the idea of “objects”.

OK, after this sat in my mind for a bit longer, something 'clicked'. What I'm thinking now is that there are many types of 'computer algebra' that can be designed. Data structures and procedures are only one such algebra - but they have taken over almost all of our mainstream thinking. So instead of designing systems with a better suited algebra, we tend to map problems back to the DS+procedures algebra quickly. Smalltalk is well suited to represent any computer algebra (given that the DS/procedure algebra is implemented in some 'objects', not in the core language).

created large fragile bodies of legacy code that requires experts in the old methods to maintain, fix, extend

If I understand correctly you are saying that better methods would involve objects and 'algebra' that perhaps don't involve data structures and procedures at all, even all the way down for some systems.

alankay on Sept 22, 2017 [–]

Mathematics is a plural for a reason. The idea is to invent ways to represent and infer that are not just effective but help thinking. I don't think Smalltalk is well suited to represent any algebra (the earliest version (-72) was closer, and the next phase of this would have been much closer as a “deep” extensible language). A data structure is something that allows fields to be “set” from the outside. This is not a good idea. My original approach was to try to tame this, but I then realized that you could replace “commands” with “requests” and imperatives with setting goals.

shalabhc on Sept 22, 2017 [–]

and the next phase of this would have been much closer as a “deep” extensible language.

Are these ideas (and the 'address space of objects') elaborated on somewhere?

A data structure is something that allows fields to be “set” from the outside. This is not a good idea. My original approach was to try to tame this, but I then realized that you could replace “commands” with “requests” and imperatives with setting goals.

I agree in principle - but I'm having trouble imagining computing completely without data structures (I'm reading 'Early History of Smalltalk' to see if it clicks).

alankay on Sept 23, 2017 [–]

You need to have “things that can answer questions”. I'd like to get the “right answer” when I ask a machine for someone's date of birth, and similarly I'd like to get the right answer when I ask for their age. It's quite reasonable that the syntax in English is the same.

 ? Alan's DOB
 ? Alan's age

Here “?” is a whole computer. We don't know what it will do to answer these questions. One thing is for sure: we are talking to a -process-, not a data structure! And we can also be sure that to answer the second it will have to do the first, it will have to ask another process for the current date and time, and it will have to do a computation to provide the correct answer. The form of the result could be something static, but possibly something more useful would be to have the result also be a process that will always tell me “Alan's age” (in other words, more like a spreadsheet cell (which is also not “data” but a process)). If you work through a variety of examples, you will (a) discover that questions are quite independent of the idea of data, and (b) that processes are the big idea – it's just that some of them change faster or slower than others. Add in a tidy mind, and you start wanting languages and computing to deal with processes, consistency, inter-relations, and a whole host of things that are far beyond data (yet can trivially simulate the idea in the few cases it's useful). On the flip side, you don't want to let just anybody change my date of birth willy-nilly with the equivalent of a stroke of a pen. And that goes for most answers to most questions. Changes need to be surrounded by processes that protect them, allow them to be rolled back, prevent them from being ambiguous, etc. This is quite easy stuff, but you have to start with the larger ideas, not with weak religious holdovers from the 50s (or even from the extensional way math thinks via set theory).
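
A tiny sketch of that flavor of “answers as processes” (Python; the names, the ask protocol, and the placeholder date of birth are invented for illustration):

  # Questions are answered by a process; 'age?' returns a live process
  # that recomputes against the clock each time, like a spreadsheet cell.
  from datetime import date

  class Person:
      def __init__(self, dob):
          self.__dob = dob                      # guarded inside the object

      def ask(self, question):
          if question == 'DOB?':
              return self.__dob
          if question == 'age?':
              def age_now(today=None):
                  today = today or date.today()
                  years = today.year - self.__dob.year
                  if (today.month, today.day) < (self.__dob.month, self.__dob.day):
                      years -= 1
                  return years
              return age_now
          return 'message not understood'

  alan = Person(date(1940, 1, 1))   # illustrative date, not anyone's actual DOB
  age = alan.ask('age?')
  print(age())                      # recomputed from today's date each time it runs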

shalabhc on Sept 25, 2017 [–]

Thank you for the elaboration! (And for anyone else reading this thread I found an old message along the same topic: http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-…) I'm thinking along these lines now: decompose systems along lines of 'meaning', not data structures (data structures add zero meaning and are a kind of 'premature materialization'), design messages first, late bind everything so you have the most options available for implementation details, etc. The other thing I'm thinking is: why have only one way to implement the internal process of an object/process? There are often multiple ways to accomplish goals, so allow multiple alternative strategies for responding to the same message and let objects choose one eventually. Edit: wanted to add that IIUC, 'messages' doesn't imply an implementation strategy - i.e. they are messages in the world of 'meaning'. In the world of implementation, they may just disappear (inlined/fused) or not (physical messages across a network), depending on how the objects have materialized at a specific point in time.
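
A small sketch of the “several strategies behind one message” idea (Python; the Store class, its message names, and its strategies are invented for illustration, and they only return descriptions rather than actually persisting anything):

  # One message, several interchangeable strategies, chosen late and
  # entirely inside the object.
  class Store:
      def __init__(self):
          self.strategies = {
              'in_memory': lambda key, value: ('would keep in RAM', key, value),
              'on_disk':   lambda key, value: ('would append to a log', key, value),
          }
          self.current = 'in_memory'

      def receive(self, message, **args):
          if message == 'persist':
              # The caller states only the goal; which strategy runs is a
              # late-bound internal choice that can change at any time.
              return self.strategies[self.current](args['key'], args['value'])
          if message == 'use strategy':
              self.current = args['name']
              return True
          return 'message not understood'

  s = Store()
  print(s.receive('persist', key='dob', value='1940-01-01'))
  s.receive('use strategy', name='on_disk')
  print(s.receive('persist', key='dob', value='1940-01-01'))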

mmiller on Sept 26, 2017 [–]

You've got the right idea. If you think about how services on the internet operate, they don't have one way of implementing their internal processes, either. I finally got the idea, after listening to Kay for a while, that even what we call operating systems should be objects (though, as Xerox PARC demonstrated many years ago, there is a good argument to be made that what we call operating systems need a lot of rethinking in this same regard, ie. “What we call X should be objects.”). It occurred to me recently that we already have been doing that via. VMs (through packages such as VMWare and VirtualBox), though in pretty limited ways. Incidentally, just yesterday, I answered a question from a teacher on Quora who wanted advice on how to teach classes and methods to a student in Python, basically saying that, “The student is having trouble with the concepts of classes and methods. How can I teach those ideas without the other OOP concepts?” (https://www.quora.com/Can-I-explain-classes-and-objects-with…) It prompted me to turn the question around, and really try to communicate, “You can teach OOP by talking about relationships between systems, and semantic messaging.” If they want to get into classes and methods later, as a way of representing those concepts, they can. What came to mind as a way around the class/method construct was a visual environment in which the student could experience the idea of different systems communicating through common interfaces, and so I proposed EToys as an alternative to teaching these ideas in Python. I also put up one of Kay's presentations on Sketchpad, demonstrating the idea of masters and instances (which you could analogize to classes and object instances). I felt an urge, though, to say something much more, to say, “You know what? Don't worry about classes and methods. That's such a tiny concept. Get the student studying different kinds of systems, and the ways they interact, and make larger things happen,” but I could tell by the question that, dealing with the situation at hand, the class was nothing close to being about that. It was a programming class, and the task was to teach the student OOP as it's been conceived in Python (or perhaps so-called “OOP”. I don't know how it conceives of the concept. I haven't worked in it), and to do it quickly (the teacher said they were running out of time). The question has gotten me thinking for the first time that introducing people to abstract concepts first is not the right way to go, because by going that route, one's conceptions of what's possible with the idea become so small that it's no wonder it becomes a religion, and it's no wonder our systems don't scale. As Kay keeps saying, you can't scale with a small conception of things, because you end up assuming (locking down) so much, it's impossible for its morphology to expand as you realize new system needs and ways of interacting. The reason for this is that programming is really about, in its strongest conception, modeling what we know and understand about systems. If we know very little about systems that already exist, their strengths and weaknesses, our conception of how semantic connections are made between things is going to be very limited as well, because we don't know what we don't know about systems, if we haven't examined them (and most people haven't). 
The process of programming, and mastering it, makes it easy enough to tempt us to think we know it, because look, once we get good enough to make some interesting things happen (to us), we realize it offers us facilities for making semantic connections between things all day long. And look, we can impress people with that ability, and be rewarded for it, because look, I used it to solve a problem that someone had today. That's all one needs, right?… Kay has said this a couple different ways. One was in “The Early History of Smalltalk”. He asked the question, “Should we even teach programming?” Another is an argument he's made in a few of his presentations: Mathematics without science is dangerous.

shalabhc on Sept 25, 2017 [–]

(also, for the benefit of anyone else reading this thread, the following section written in 1993 talks more about these ideas: http://worrydream.com/EarlyHistoryOfSmalltalk#oostyle)

scroot on Sept 21, 2017 [–]

The main assumption here is my thoughts are organized as graphs

Herbert Simon talks a lot about this in Sciences of the Artificial. It turns out most of human thinking is just lists. I'm not sure if that still stands in the field of psychology (my version of the book is pretty old, from the 80s). There's a good book (a little dense though) that might help with the more abstract thinking in the direction you're going. It's called “Human Machine Reconfigurations” and it's one of the more clever books I've come across on human machine interaction, written by an anthropologist/sociologist who also worked at PARC. So often the human part is what gets lost here.

shalabhc on Sept 21, 2017 [–]

Thanks for the references! Sciences of the Artificial is already on my list.

The main assumption here is my thoughts are organized as graphs

I realize this would be better phrased as “the information I'm trying to communicate is organized as graphs'.

mmiller on Sept 19, 2017 [–]

I think your analogy is going in a better direction. I had the same idea after taking a look at Squeak for a while. What would need to be added to your analogy is a notion of design. You see, in Smalltalk, for example, your programming takes place in the messages. So, even the daemons (which, for the sake of argument, we could think of as analogies to objects), which would be the senders and receivers of messages would be made out of the same stuff, not C/C++. So, this is a pretty dramatic departure from the way Unix operates. What this should suggest is that the semantic connection between objects is late-bound. Think of them as servers. Secondly, in the typical Smalltalk implementation, there is still compilation, but it's incremental. You compile expressions and methods, not the whole body of source code for the whole system. What's really different about it is since the semantic actions are late-bound, you can even compile something while a thread is executing through the code you're compiling. So, you get nearly instant feedback on changes for free. Bret Victor's notion of programming environments blurs the axes you're talking about even more, so that you don't have to do two steps to see your change, while a thread is running (edit, then compile). You can see the effect of the change the moment you change an element, such as the upper bound of an iterative loop. To make it even more dynamic, he tied GUI elements (sliders) to such things as the loop parameters, so that you don't have to laboriously type the values to try them out. You can just change the slider, and see the effects of the change in a loop's range very quickly, such that it almost feels like you're using a tool-based design space, rather than programming. I don't know how this was done in ST-78, but in ST-80, at least in accounts I've heard from people who've used versions of it, and in the version of Squeak I've used from time to time, the source code is not technically stored with the class object, though the system keeps references to the appropriate pieces of source code, and their revisions, mapping them to the classes in the system, so that when you tell the system you want to look at the source code for a class, it pulls up the appropriate version of the code in an editor. Source code is stored in a separate file, and Smalltalk has a version control system that allows reviewing of source code edits, and reversion of changes (undo). The class object typically exists in the Smalltalk image as compiled code. There are many things that are different about this vs. what you typically practice in CS, but addressing your point about data, in OOP, objects are supposed to take the place of data. In OOP, data contains its own semantics. It inverts the typical notion of procedures acting on data. Instead, data contains procedures. It's an active “live” part of the programming that you do. So, yes, data is persisted, along with its procedures. A simple example in Smalltalk is: 2 + 2. If we analyze what's going on, the “2”'s are the pieces of data, the objects/servers, and ”+“ is used to reference a method in one of the data instances, but it doesn't stop there. The “2” objects communicate with each other to do the addition, getting the result: 4. As you can tell implicitly, “2 + 2” is also source code, to generate the semantic actions that generate the result.
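
To make that last point concrete, here is a toy version of “2 + 2” as one object sending a message to another (Python; the Num class and its receive protocol are invented for illustration, and real Smalltalk dispatch is of course richer than this):

  # '2 + 2' read as a message send rather than an operator applied to
  # two operands: the receiver carries the knowledge of how to add.
  class Num:
      def __init__(self, value):
          self.value = value

      def receive(self, selector, argument):
          if selector == '+':
              return Num(self.value + argument.value)
          return 'message not understood'

  two = Num(2)
  four = two.receive('+', Num(2))
  print(four.value)   # 4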

alankay on Sept 19, 2017 [–]

“More or less” … I can see that it's hard after decades of “data-centric” perspectives to think in terms of “computers” rather than “data”, and about semantics rather than pragmatics. It's not “data contains procedures” but that objects are (a) semantically computers, and are impervious to attack from the outside (a computer has to let an attack happen via its own internal programming), and (b) what's inside can be anything that is able to deal with messages in a reasonable way. In Smalltalk, these are more objects (there are only objects in Smalltalk). The way the internals of typical Smalltalk objects are organized could be done better (we used several schemes, but all of them had variables and methods, which were also objects). So “2” is not “data” in Smalltalk (unfortunately, it is in Java, etc.) We had planned that the interior of objects should be an “address space of objects” because this is a better and more recursive way to do modularization, and to allow a different inter-viewing scheme (we eventually did some of this for the Etoys system for children about 20 years ago). But the physical computers at Parc were tiny, and the code we could run was tiny (the whole system in the Ted Nelson demo video was a little over 10,000 lines of code for everything). So we stayed with our top priority: to make a real and highly interactive system that was very comprehensive about what it and the user could do together.

mmiller on Sept 21, 2017 [–]

Taking your feedback into consideration, I had the thought that it would be more accurate to talk about things like “2” in an OOP context as symbols with semantics (which provide meaning to them), not data, since “data” connotes more a collection of inputs/quantities, where we may be able to attach a meaning to it, or not, and that wasn't what I was going after. I was going after a relationship between information and semantics that can be associated with it, but trying to provide a transition point from the idea of data structures to the idea of objects, for someone just learning about OOP. Doing a sleight of hand may not do the trick. My starting point was to use a very interesting concept, when I encountered it, in SICP, where it discussed using procedures to emulate data, and everything that can be done with it. It seemed to help explain for the first time what “code is data” meant. It illustrated the inversion I was talking about: https://tekkie.wordpress.com/2010/07/05/sicp-what-is-meant-b… “In 2.1.3 it questions our notion of “data”, though it starts from a higher level than most people would. It asserts that data can be thought of as this constructor and set of selectors, so long as a governing principle is applied which says a relationship must exist between the selectors such that the abstraction operates the same way as the actual concept would.” It went on to illustrate how in Lisp/Scheme you could use functions to emulate operations like “cons”, “car”, and “cdr”, completely in procedural space, without using data structures at all. This is what I illustrate with “2 + 2”, and such, that code is doing everything in this operation, in OOP. It's not a procedure applied to two operands, even though that's how it looks on the surface.
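
For anyone who hasn't seen it, the gist of that SICP idea fits in a few lines (rendered here in Python rather than Scheme, and only as a sketch of the idea, not SICP's exact code):

  # 'Data' emulated entirely with procedures: cons returns a closure, and
  # car and cdr just ask it the right question.
  def cons(a, b):
      def pair(selector):
          return a if selector == 'car' else b
      return pair

  def car(pair):
      return pair('car')

  def cdr(pair):
      return pair('cdr')

  p = cons(1, cons(2, None))
  print(car(p), car(cdr(p)))   # 1 2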

alankay on Sept 21, 2017 [–]

Yes, the SICP follows “simulate data” ideas much further back in the past, including the B5000 computer, and especially the OOP systems I did. But the big realization is that there are very few things that are helpful when they are passive, and the non-passive parts are the unique gift of computing. The question is not whether ideas from the past can be simulated (easy to see they can be if the building blocks are whole computers) but what do we “mean by 'meaning' ”? Good answers to this are out of the scope of HN, but we should be able to imagine -processes- moving forward in both our time and in simulated time that can answer our questions in a consistent way, and can be set to accomplish goals in a reasonable way.

mmiller on Sept 19, 2017 [–]

I was using “data” in the spirit of a saying I heard many years ago in CS, that, “Data is code, and code is data.” It seems that people in CS are still familiar with this phrase. I was focusing on the latter part of that phrase. I was trying to answer the question that I think is often implied once you start talking to people about real OOP, “What about data?” I almost don't like the term “data” when talking about this, because as you say, it gets one away from the focus on semantics, but whenever you're talking to people in the computing field, such as it is, I think this question is unavoidable, because people are used to thinking of code and information as separate, hence the notion of data structures. People need a way to translate in their minds between what they've done with information before, and what it can be. So, I used the term “data” to talk about “literal objects” (like “2”, or other kinds of input), but I was using the description of “processors” (ie. computers), “containing procedures,” which can also be thought of as “operators.” I think the idea of an “inversion” is quite apt, because as you've said before, the idea of data structures is that you have procedures acting on data. With real objects, you still have the same essential elements in programming, the same stuff to deal with, but the kinds of things programmers typically think about as “data” are objects/computers in OOP, with intrinsic semantics. So, you're still dealing with things like “2”, just as procedures acting on data do, but instead of it being just a “dead” symbol, that can't do anything, “2” has semantics associated with an interface. It's a computer.

shalabhc on Sept 20, 2017 [–]

We had planned that the interior of objects should be an “address space of objects” because this is a better and more recursive way to do modularization

Something that nags me in the back of my mind is that messages are not just any object, they always have the selector attached. Why not let objects handle any other object as a message? Is this what you mean by the above? Thinking about the biological analogy (maybe taking it too far…): the system of cells is distinct from the system of proteins inside the cells and going up the layers we have the systems of creatures. So the way proteins interact is different from how cells interact, etc. but each system derives its distinct behaviors from the lower ones. Also, the messages are typically not the entities themselves but other lower level stuff (cells communicate using signals that are not cells). So in a large scale OO system we might see layers of objects emerge. Or maybe we need a new model here, not sure.

alankay on Sept 20, 2017 [–]

Take a look at the first implemented Smalltalk (-72). It implemented objects internally as a “receive the message” mechanism – a kind of quick parser – and didn't have dedicated selectors. (You can find “The Early History of Smalltalk” via Google to see more.) This made the first Smalltalk “automatically extensible” in the dimensions of form, meaning, and pragmatics. When Xerox didn't come through with a replacement for the Alto we (and others at Parc) had to optimize for the next phases, and this led to the compromise of Smalltalk-76 (and the succeeding Smalltalks). Dan Ingalls chose the most common patterns that had proved useful and made a fixed syntax that still allowed some extension via keywords. This also eliminated an ambiguity problem, and the whole thing on the same machine was about 180 times faster. I like your biological thinking. As a former molecular biologist I was aware of the vast many orders of magnitude differences in scale between biology and computing. (A typical mammalian cell will have billions of molecules, etc. A typical human will have 10 Trillion cells with their own DNA and many more in terms of microbes, etc.) What I chose was the “Cambrian Revolution Recursively”: that cells could work together in larger architecture from biology, and that you can make the interiors of things at the same organization of the wholes in computing because of references – you don't have to copy. So just “everything made from cells, including cells”, and messages made from cells, etc. Some ideas you might find interesting are in an article I wrote in 1984 – called “Computer Software” – for a special issue of Scientific American on “Software”. This talks about the subject in general, and looks to the possibility of “tissue programming” etc.

alankay on Sept 20, 2017 [–]

I should have mentioned a few other things for the later Smalltalks. First, selectors are just objects. Second, you could use the automatic “message not understood” mechanism to field an unrecognized object. I think I'd do this by adding a method called “any” and letting it take care of arbitrary unknown objects …
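
A loose analogue in Python, for illustration only (Python's __getattr__ is called when normal attribute lookup fails, which is roughly the role the “message not understood” mechanism plays; the “any” handler here is the hypothetical method mentioned above, not an existing API):

  # Fielding unrecognized messages with a catch-all handler.
  class Receiver:
      def greet(self):
          return 'hello'

      def __getattr__(self, selector):
          # Only reached when normal lookup finds nothing.
          def any(*args, **kwargs):
              return ('not understood', selector, args)
          return any

  r = Receiver()
  print(r.greet())              # normal dispatch
  print(r.fly(altitude=30000))  # falls through to the catch-all handler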

shalabhc on Sept 21, 2017 [–]

adding a method called “any”

Right, I understand there are ways to do this with methods but my question was more about the purity aspect, which you already addressed above.

alankay on Sept 21, 2017 [–]

A selector is an object – so that is pure – and its use is a convention of the messaging, and the message itself is one object, that is an instance of Class message. What's fun is that every Smalltalk contained the tools to make their successors while still running themselves. In other words, we can modify pretty much anything in Smalltalk on the fly if we choose to dip into the “meta” parts of it, which are also running. In Smalltalk-72, a message send was just a “notify” to the receiver that there was a message, plus a reference to the whole message. The receiver did the actual work of looking at it, interpreting it, etc. This is quite possible to make happen in the more modern Smalltalks, and would even be an interesting exercise for deep Smalltalkers.

shalabhc on Sept 22, 2017 [–]

A selector is an object – so that is pure – and its use is a convention of the messaging

The selector 'convention' is hard coded in the syntax - this appears to elevate selector based messaging over other kinds. But now I'm rethinking this differently - i.e. selectors isn't part of the essence, but a specific choice that could be replaced (if we find something better.)

mmiller on Sept 25, 2017 [–]

I can't remember if I've brought this up already in this thread, but if you want to “kick the tires” on ST-72, Dan Ingalls has an implementation of it up on the web. It's running off of a real ST-72 image. I wrote about it at https://tekkie.wordpress.com/2014/02/19/encountering-smallta… I include a link to it, and described how you can use it (to the best of my knowledge), though my description was only current to the time that I wrote it. Looking at it again, Ingalls has obviously updated the emulation. The nice thing about this version is it includes the original tutorial documentation, written by Kay and Adele Goldberg, so you can download that, and learn how to use it. I found that I couldn't do everything described in the documentation. Some parts of the implementation seemed broken, particularly the class editor, which was unfortunate, and some attempts to use code that detected events from the mouse didn't work. However, you can write classes from the command line (ST-72 was largely a command-line environment, on a graphical display, so it was possible to draw graphics). If you take a look at it, you will see a strong resemblance to Lisp, if you're familiar with that, in terms of the concepts and conventions they were using. As Kay said in “The Early History of Smalltalk,” he was trying to improve on Lisp's use of special forms. I found through using it that his notion of classes, from a Lisp perspective, existed in a nether world between functions and macros. A class could look just like a Lisp function, but if you add parsing behavior, it starts behaving more like a macro, parsing through its arguments, and generating new code to be executed by other classes. The idea of selectors is still kind of there, informally. It's just that it takes a form that's more like a COND construct in Lisp. So, rather than each selector having its own scope, as in later versions, all of them exist in an environment that exists in the scope of the class/instance. After using it for a while, I could see why they went to a selector model of message receipt, because the iconic language used in ST-72 allowed you to express a lot in a very small space, but I found that you could make the logic so complex it was hard to keep track of what was going on, especially when it got recursive.

shalabhc on Sept 25, 2017 [–]

Sweet, thanks! There's also the ST-78 system at https://lively-web.org/users/bert/Smalltalk-78.html

existed in a nether world between functions and macros

Macros are just functions that operate on functions at 'read-time', from my POV. So if you eliminate the distinction between read-time and run-time, they're the same.

It's just that it takes a form that's more like a COND construct in Lisp.

And even COND isn't special, it's just represented as messaging in Smalltalk, right?

you could make the logic so complex it was hard to keep track of what was going on

Interesting, I see.

mmiller on Sept 26, 2017 [–]

“And even COND isn't special, it's just represented as messaging in Smalltalk, right?” Right. What I meant was that the parsing would begin with “eyeball” (ST-72 was an iconic language, so you would get a character that looked like an eye viewed sideways), and then everything after that in the line was a message to “eyeball,” talking about how you wanted to parse the stream–what patterns you were looking for–and if the patterns matched, what messages you wanted to pass to other objects. That was your “selector” and method. What felt weird about it, after working in Squeak for a while, is these two concepts were combined together into “blobs” of symbolic code. You would have a series of these “messages to eyeball” inside a class. Those were your methods. The reason I said it was similar to COND was it had a similar format: A series of expressions saying, “Conditions I'm looking for,” and “actions to take if conditions are met.” It was also similar in the sense that often that's all that would be in a class, in the same way that in Lisp, a function is often just made up of a COND (unless you end up using a PROG instead, which I consider rather like an abomination in the language). In ST-72, there's one form of conditional that uses a symbol like “implies” in math (can't represent it here, I don't think), and another where you can be verbose, saying in code, “if a = b then do some stuff.” But what actually happens is “if” is a class, and everything else (“a = b then do some stuff”) is a message to it. Of course, you could create a conditional in any form you want. In ST-80, they got rid of the “if” keyword altogether (at least in a “standard” system), and just started with a boolean expression, sending it a message. a = b ifTrue: [<do-one-thing>] ifFalse: [<do-something-else>]. They introduced lambdas (the parts in []'s) as objects, which brought some of the semantics “outside of the class” (when viewed from an ST-72 perspective). It seems to me that presents some problems to its OOP concept, because the receiver is not able to have complete control over the meaning of the message. Some of that meaning is determined by partitioned “blocks” (lambdas) that the receiver can't parse (at least I don't think so). My understanding is all it can do with them is either pass parameter(s) to the blocks, executing them, or ignore them. One of the big a-ha moments I had in Smalltalk was that you can create whatever control structures you want. The same goes for Lisp. This is something you don't get in most other languages. So, a temptation for me, working in Lisp, has been to spend time using that to work at trying to make code more expressive, rather than verbose. A positive aspect of that has been that it's gotten me to think about “meanings of meaning” in small doses. It creates the appearance to outsiders, though, that I seem to be progressing on a problem very slowly. Rather than just accepting what's there and using it to solve some end goal, which I could easily do, I try to build up from the base that's there to what I want, in terms of expression. What I have just barely scratched the surface of is I also need to do that in terms of structure–what you have been talking about here.
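
A rough illustration of that “build your own control structure” point (Python; the class and method names are made up, and this only mimics the shape of ifTrue:ifFalse:, not Smalltalk's actual semantics):

  # A user-defined conditional: the two branches are passed as blocks
  # (here, lambdas), and the receiver decides which one to run.  'if' is
  # not special syntax, just a message carrying deferred pieces of code.
  class SmalltalkishBool:
      def __init__(self, value):
          self.value = value

      def if_true_if_false(self, true_block, false_block):
          return true_block() if self.value else false_block()

  a, b = 3, 3
  result = SmalltalkishBool(a == b).if_true_if_false(
      lambda: 'equal',
      lambda: 'not equal')
  print(result)   # 'equal'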

alankay on Sept 23, 2017 [–]

It's an extensible language with a meta system so you can make each and every level of it do what you want. And, as I mentioned, the first version of Smalltalk (-72) did not have a convention to use a selector. The later Smalltalks wound up with the convention because using “keywords” to make the messages more readable for humans was used a lot in Smalltalk-72.

shalabhc on Sept 19, 2017 [–]

I've seen some of Bret Victor's talks but your description made it click!

the source code is not technically stored with the class object

This is more of an optimization choice, perhaps? Given the link is maintained, it might be OK to say the source code and the bytecode form are two forms representing the same object?

alankay on Sept 20, 2017 [–]

Nothing is technically stored with the class object (everything is made from objects related by object-references). Semantically everything is “together”. Pragmatically, things are where they need to be for particular implementations. In the early versions of Smalltalk on small machines it was convenient to cache the code in a separate file (but also, every object – e.g. in Smalltalk-76 – was automatically swappable to the disk), just another part of the pragmatics of making a very comprehensive system run on a tiny piece of hardware.

mmiller on Sept 19, 2017 [–]

“This is more of an optimization choice, perhaps? Given the link is maintained, it might be OK to say the source code and the bytecode form are two forms representing the same object?” Sure. :)

gone35 on Sept 18, 2017 [–]

9:05-12:30 I'm sold. A mindblowing vision for what the universality of computation can truly do… Thank you.

alankay on Sept 18, 2017 [–]

Actually, just what could be done 40 years ago. Much more can be (and should be) done today. That's the biggest point.

WoodenChair on Sept 17, 2017 [–]

It is a privilege of modern media, like HN, that we get to interact with someone like Alan Kay. Just 30 years ago interacting with a person of great importance to a field only happened if you were lucky enough to work with them or go to school where they taught, which is just a lottery. And it's not just Alan Kay, there are many amazing visionaries who stop by HN. It truly is a privilege that we should not take for granted.

coldtea on Sept 17, 2017 [–]

One question for Alan, if he's reading on (and of course thanks for decades of great work and inspiration). Do you think that your criticism of passive consumption etc. might be incompatible not just with how things like modern computers and platforms are designed, but also with how most people are wired to behave? That is, that it's not just that our tools constrain us to being passive consumers but that we also prefer, promote, and seek tools that make it easier for us to be passive consumers, because most of us would rather not be bothered with creating? If this is true, this might be (a) an inherent condition of man everywhere, or (b) something having to do with our general culture (above and beyond our tools). I think that by talking about schools etc. you alluded to (b), but could (a) also be the case? Edit: I see that later on here you commented “We are set up genetically to learn the environment/culture around us. If we have media that seems to our nervous systems as an environment, we will try to learn those ways of thinking and doing, and even our conception of reality,” which kind of answers my question.

alankay on Sept 17, 2017 [–]

One perspective on this is to think about your definition of “civilization” and compare it with both “human universals” and what we know about the general lives of hunter-gatherers and the extent that we can use this to guess about the several hundred thousand years before agriculture. Many of the things on my list for “civilization” are not directly in our genes or traditional cultures: reading and writing, deductive mathematics, empirical science of models, equal rights, representative governments, and many more. It's not that we do these well, or even willingly, but learning how to do them has made a very large difference. We can think of “civilization” not as a state of being that we've achieved, but as societies that are trying to become more civilized (including getting better ideas about what that should mean). Most of the parts of civilization seem to be relatively recent inventions, and because the inventions are a bit more distant from our genetic and cultural normals, most of them are more difficult to learn. For example, as far as history can tell, schools were invented to concentrate the teaching of writing and reading, and they have been the vehicle for the more difficult learning of some of the other inventions. And, sure, from history (and even from looking around) we can see that a very high percentage of people would be very happy with servants, even slaves, whether human or technological. In Tom Paine's argument against the natural-seeming monarchy in “Common Sense” he says don't worry about what seems natural, but try to understand what will work the best. His great line is “Instead of having the King be the Law, we can have the Law be the King”. In other words, we can design a better society than nature gives rise to, and we can learn to be citizens of that society through learning. In other contexts I've pointed out that “user friendly” may not always be “friendly”. For example, the chore of learning to read fluently is tough for many children, but what's important beyond being able to read afterwards is that the learning of this skill has also forced other skills to be learned that bring forth different and stronger thinking processes. Marketing, especially for consumers, is aimed at what people -want-, but real education has to be aimed at what people really -need-. Since people often don't want what they need, this creates a lot of tension, and makes what to do with early schooling a problem of rights as well as responsibilities. One way to home in on what to do is to – just for a while – think only about what citizens of the 21st century need to have between their ears to not just get to the 22nd century, but to get there in better shape than we are now. Children born this year will be 83 in 2100. What will be their fate and the fate of their children? If people cannot imagine that the situation they are in had to be invented and worked at and made, they will have a hard time seeing that they have to learn how to work the garden as they become adults. If they grow up thinking as hunter-gatherers they can only imagine making use of what is around them, and moving on after they've exhausted it. (But there is no place to move on to for the human race – larger thoughts and views have to be learned as part of schooling and growing up.)

coldtea on Sept 17, 2017 [–]

Thank you for the reply. I think you hit the nail on the head – even though making the distinction between what people “want” and “deserve” is not very popular in the US (or where I live, or anywhere else for that matter) at the moment (or even since the 70s or so). Despite all the self-improvement books and seminars and everything, the norm is that people should just be given what they want, and not anything harder that could potentially make them better and more creative. In fact a person would be labelled “elitist” to dare suggest anything else – e.g. that universities are not just vocational schools for getting the skills for one's future job, or that technology should not just be about what Joe Public is capable of mastering, but about advancing Joe and Jane public.

alankay on Sept 18, 2017 [–]

It's not about “deserve” but about what people “need” to be part of a civilization and help sustain and grow it. We tend to lose sight of what is not happening right in front of us and take the larger surround as a given. (But it had to be created from scratch, and not just made but tended and sustained and grown. This is one of the main original reasons for having schools in the US: to help create enough sophistication so that thinking, arguing and voting can be done to make progress rather than just for individual gain.) The simplest way to think about all this is through the “Human Universals” ideas (more nicely put than “Lord of the Flies”!). Our natural tendencies via genetics and some remnants in our common-sense culture are to be oral, social, tribal, to act like hunter-gatherers, etc. (With reference to the last idea, consider the general behavior of businesses, of “shopping”, etc.) It's not about “creativity” per se – this has been misunderstood. It's about helping children become part of the “large conversation” beyond just vocational training. (Consider adding becoming a citizen, responsibilities for growing the next generation (whether or not you have children), gathering and participating in “richness of life”, etc.)

mmiller on Sept 19, 2017 [–]

“In fact a person would be labelled “elitist” to dare suggest anything else – e.g. that universities are not just vocational schools for getting the skills for one's future job, or that technology should not just be about what Joe Public is capable of mastering, but about advancing Joe and Jane public.” I've run into this very thing. The ironic thing about it is that advocating for literacy and an understanding of the abstract inventions of our civilization is the antithesis of elitism. Think about what would take place in a society where nobody who has the ability to influence events knows these things; where people do not know how to create their own analysis of events, to argue well, or to invent new social structures that benefit communities; where people cannot be active agents in negotiating their society's political destiny, keeping in mind the invented conceptual “guardrails” that maintain things like rule through law and equal rights. In a society like that, elitism is all you have! Where the thought-inventions of our modern civilization are maintained, there is more possibility for creating an egalitarian society and, in certain contexts, more power-sharing, and I think the evidence suggests more social progress. This doesn't get enough credit these days, but I think the historical evidence suggests certain forms of religion have helped promote some of these civilizational inventions as virtues, on a widespread basis, supporting the secular education that spread them. My experience has been that the reason people may put the label “elitist” on it is that learning these ideas is hard, and everybody knows that. So, there's been an understood implication that only a small number of people will be able to know most of them, and that if most of the attention is devoted to inculcating these inventions in a relatively small number of people, then that will create an elite who will use their skills to the disadvantage of everyone else, who did not receive this education. What these people prefer to use instead is a lowest-common-denominator notion of egalitarianism, which doesn't promote egalitarian virtues in the long run, because it attenuates the possibility of maintaining a society where people can enact them. If we forget the hard stuff, because it's hard, we won't get the benefits it offers over time. We will revert to a pre-modern state, because all the power to maintain our civilization is in these ideas. Keeping them as conventions/traditions is not good enough, because as we see, conventions get challenged via a pragmatic, anti-tradition impulse with each new generation.

alankay on Sept 19, 2017 [–]

I think more needs to be done to understand why some really hard things to do – like hitting a baseball, shooting 3-pointers, etc. – are not considered elitist, and have literally millions of children spending many hours practicing them. (And note that it is hard to find complaints about the “unfairness” of sports teams trying to find and hire the very best players.) And, in the end, very few get really good (I think I saw somewhere that there are only about 70,000 professional athletes in the US – vs e.g. about 900,000 PhDs in the sciences, math, engineering and medicine.) Two ideas about this are that (a) these are activities in which the basic act can be seen clearly from the first, and (b) they are already part of the larger culture. There are levels that can be seen to be inclusive starting with modest skills (cooking is another such). Music is interesting to look at. If the music is simple, we find the singing of songs with lyrics vastly more popular than instrumentals on the pop charts. But we also find “guitar gods” at the next and still high level of popularity. As the music gets more complex and requires more learning to be able to hear what is going on, the popularity drops off (and this has happened in many pop genres over the last 100 years or so). A lot of pop culture (I think) comes from teenagers wanting their place in the sun, and quickly. Finding a genre that's doable and can be a proclamation of identity – akin to trying to be distinctive with clothing or haircuts – can be much easier than tackling a developed skill. I think a very large problem for the learning of both science and math is just how invisible their processes are, especially in schools. The wonderful PSSC physics curriculum from the 50s (https://en.wikipedia.org/wiki/Physical_Science_Study_Committ…) bridged that gap with many short films showing scientists doing their thing on topics and using methods that were completely understandable right from the first minutes. This made quite a difference to many high school students. I think I said somewhere in the blather of the interview that the easiest way to deal with the problems of teaching reading is to revert to an oral society, especially if schools increasingly give in to what students expect from their weak uses of media. A talk I gave some years ago showed a title line that alternated between “The best way to predict the future is to invent it” and “The easiest way to predict the future is to prevent it”. The latter is more and more popular these days.

crispinb on Sept 19, 2017 [–]

I am calling for the field to have a larger vision of civilization and how mass media and our tools contribute or detract from it

That would be nice. Unfortunately, outside of Kay's rarefied circles, “the field” is close to being a Darwinian selection machine for the worst (most unimaginative, trivial, greedy, pusillanimous) traits. I've ducked in and out of various IT and development roles since the early 2000s. Nearly every one of the best people I've known in that time has left the field for others where they felt they could develop more of their broad humane selves. Some have retreated to academia, others moved altogether (nursing seems to be a theme, oddly). Those of the best who didn't leave are embittered and cynical. For all the febrile contrepreneur-speak, the overall picture is pretty bleak.

alankay on Sept 20, 2017 [–]

See the comment by scroot above. It meshes with yours, but recognizes that the underlying culture has gone in directions that are undermining real progress and the value of life. This is why I've put a fair amount of time, along with many others, into trying to get elementary education to a better place. I think we'll have to grow a few generations of “civilization carriers” to pull out. (And, yes, I've been staggered at the low place that much of computing has gotten to, but it is reflecting much larger cultural problems.)

crispinb on Sept 20, 2017 [–]

Yes, wholeheartedly agree with this being part of a wider cultural current. It feels like we're at a low point from which things can only improve, but I have thought that before, only to see the decline continue. My comment was a mere cri de coeur from the bottom ranks of the industry hierarchy. Reading the article and comments here was a slight shock. I had almost forgotten that these high-minded humanistic strands had once been a prominent part of the tech world. I hope they re-emerge. Thanks for your contributions.

alankay on Sept 21, 2017 [–]

Take a look at how the US responded to WWI re “free speech” etc, and you'll see that we haven't hit bottom yet. But we have fulfilled H.L. Mencken's 1919 prophecy: “As democracy is perfected, the office of president represents, more and more closely, the inner soul of the people. On some great and glorious day the plain folks of the land will reach their heart's desire at last and the White House will be adorned by a downright moron.”

Excerpt 2

da02 on Sept 20, 2017 [–]

Yes. But, even if the field adopted a larger/better vision, you still have a bigger problem: most smart people are attracted to mediocrity. Example: the LFTR (ie molten salt breeders) people have been trying to get their stuff adopted, but most smart people (I know of) are attracted to micro-optimizing windmills and outlawing coal. Even smart people who want to take on a larger vision are still able to snatch mediocrity from the jaws of success. It may be counter-intuitive, but Hans Hoppe's views on how to improve civilization would lead to a fulfillment of your goals. It's not a coincidence that Japan, Germany, the US, and Italy were centralized in the 1800s and became involved in world wars in the 1900s. Medieval Europe had no central empire, yet was able to catch up with and surpass China and Japan, with the Renaissance as a culmination. A lack of mega-states led to more diversity, cooperative competition, and a stable global civilization through variety and choice. If we are to repeat past success and get more Faradays and Teslas, micro-states or no states are necessary. The EU is an example of this: a political wing of NATO. First they get you by passing regulations protecting the “consumer”, then the wars and corruption come about. Goldman Sachs made money when the wealthier states wanted to pay off the debts of its clients (Greece). This is no different from when Clinton/Congress wanted to bail out Mexico (which owed lots of money to US banks like Citibank/Citigroup). Singapore and Liechtenstein both have a powerful de facto monarchy in control, but their small land size and population limit the destructive desires of the state. Yet look at the diversity of race/opinions/demographics that exists in such a small area. Until the fetish for megastates subsides, we are doomed to slow adoption of knowledge. However, I am not a good communicator. Might I suggest this book for a better scholarly discussion: https://mises.org/library/economics-and-ethics-private-prope

alankay on Sept 21, 2017 [–]

My goals are a lot bigger than Hoppe's (and I fervently hope for something much better than his vision).

da02 on Sept 21, 2017 [–]

Could you explain the “are a lot bigger” part? I get the sense Hoppe is repulsive to you, but I am very open-minded to hearing a counter-argument and find out what I got wrong about him. For example, are you against a complete free-market in schooling? Or do you want a voter-based approach on the city/state/federal levels?

alankay on Sept 22, 2017 [–]

My perspective is that our planet is small but there are lots of people. “Biology is variation”, so there are wide distributions of properties. I like the idea of “Equal Rights” (and also think that it's necessary). The systems that are critical, including the human systems, are non-linear and intertwined. The combination of these is that human society needs to find – invent – how to organize itself. This is very much in the spirit of the thinking that led to the American Constitution, and Tom Paine's “Instead of having the King be the Law, we can have the Law be the King”. We need solutions that allow further thinking and design to be done. One way to look at this is to ask questions about “human nature”: to what extent does it need to be followed, and to what extent should we try to teach (even train) the children to act in “designed ways” – for example, not trying to take revenge. The interesting and difficult parts of goals like these are that we have humans in the mix while trying to come up with better societal designs. For example, even the best forms of socialism have been “gamed” incessantly, and often fatally, by humans only interested in “harvesting” for themselves. This is one of many reasons why well thought out versions of social reform have often been destroyed when the implementation phase is started: “Everyone loves Change, except for the Change part!”

da02 on Sept 22, 2017 [–]

I cannot argue with any of your vision. It's based on sound thought, knowledge, and scholarly research. However, I disagree with your examples, specifically the Constitution. I apologize for disagreeing with you, but I still don't see how any form of state can function with a population in the double digits of millions, even if 90%+ of the population were educated in Montessori-based systems. It goes against the basic economic principles of “incentives”. The state is still a monopoly (in the classical sense) of violence, even under a Constitution. The state is also the sole interpreter of its own laws. Even under a Constitutional Republic, politicians have become the new monarchy, priesthood, and witch doctors. Just as Christians find a backdoor for polytheism, so do citizens at the voting booth and in the media, worshipping kings and queens. So the state is still the king. We just play musical chairs at each election. In fact, we even have fewer checks/balances on power today than the British Empire had during American colonial rule. The US government has absorbed private institutions that could check its power: it subsidizes churches, subsidizes and regulates private schools, the Ivy Leagues have become an extension of the state, and the US has become the #1 employer of scientists, engineers, lawyers, etc. This is a combo of socialism and fascism lite that continues to grow with each election and generation, albeit slowly. When the Law is King, people still rely on politicians to interpret and carry out the Law. They will defend “their” neo-king/queen (ie favored politician). Previously, they would have been suspicious of unelected officials. That suspicion was a de facto check/balance on power. But not the sole check/balance. Example: Obama (and Congress!) passed legislation to support Neo-Nazis in the Ukraine. https://www.stpete4peace.org/Ukraine https://www.thenation.com/article/congress-has-removed-a-ban… Where was the outcry? Trump is bad. But Congress and Obama are just as bad. These are the politicians people want in power. These are the crimes people overlook and ignore. This is just one more example of how the voters let senators and presidents get away with crimes. It is easy to expose Trump as a shady/scheming politician and crooked entrepreneur (eg Trump Uni.). However, politicians like Bill Clinton, Reagan, etc. get away with murder and Congress is fine with it. So too are the voters. The Constitution has not prevented a global military empire. Look at how the Constitution was promoted: with G. Washington's celebrity endorsement. The US was thus created partly by a militia leader who used warfare to overthrow a government and won wars. (Losing generals are unpopular.) In practice, the Law was never fully in the hands of the King. The King/monarchy had checks and balances going back to the medieval period: the various religions and churches (plural), guilds, Parliament, and so on. The Constitution has given people a false sense of ownership and protection and lowered their distrust of the state. This is one of the reasons why parents love government schooling: they can blame tax-cheats for why Johnny can't read. “Who cares if the Dept. of Edu. is unconstitutional? The ends justify the means.” Canada, Australia, and other states did not overthrow British rule, yet they still became multi-cultural and prosperous… without a violent Revolution that led to war widows, orphans, and high taxes.
This is why I disagree with the idea of a state that would let politicians (including monarchs or dictators) have access to lots of money, weapons, etc. I've seen people much better educated and with higher IQs ignore the crimes of entire political parties like the Republicans and Democrats, and of celebs like Lenin, Che, Mao, etc. In other words: the Constitution, from my p.o.v., promoted political centralization and was a pseudo-absolute monarchy in disguise. Nobles replace the absolute monarch. So I agree with your vision, but I just disagree with political centralization. (Sidenote: The Constitution was passed despite the delegates not having authority to overthrow the Articles of Confederation. The Constitution expanded the state's power, did not solve the underlying issues with the Articles of Confederation, and set a bad precedent of upper-class members of the establishment being able to expand state power to solve problems at the expense of freedom.) Thank you again for reading and putting up with this. I realize we disagree on a few things, but your patience is appreciated.

alankay on Sept 23, 2017 [–]

I'll avoid trying to reply to this. The Roman poet Juvenal quipped “But who will guard the guardians?” referring to one of the main problems of any republic. Plato had one suggested solution, and the US founders had another. Today we have something quite different than either had imagined. One key question for “civilization” has usually centered around the extent to which enough children can learn to reach beyond their genes to embrace ideas and behaviors that have been invented for the better. Another key question revolves around the trade-offs between individual choices vs “smartest choices”. This reflects the distribution of talents, outlooks, skills, knowledge, etc in any population. And also the distribution of what various people need to “feel whole”. (These are often at odds with larger organizations of societies.)

da02 on Sept 23, 2017 [–]

I think I'm beginning to understand. Thanks again for the replies. I'm going to have to think a lot about what you have written. (My apologies if I kept misinterpreting you and going off on the wrong tangents.)

alankay on Sept 24, 2017 [–]

When Neil Postman was a grad student he followed Marshall McLuhan around for a few months. One thing he noted was that McLuhan – when argued with or when asked a question – never directly replied, but just came out with another one of his zinger “koans”. Neil said he finally realized that McLuhan was not concerned about whether people were agreeing with him or even understanding him, but was most aimed at getting them to think at all! I've never been able to pull this off, but McLuhan had a real point. Socrates had the idea (via Plato) that there “was truth” and careful thinking would get everyone to the same place. (Much of science has this assumption, if you throw in a lot of experimentation and debugging as part of the careful thinking.) In any case, a reasonable explanation in science is not in the form of sentences. One of the key ideas here is that modern understanding is a lot more than changing from one set of sentences to another. (This is a huge problem for humans because for tens of thousands of years and more there were no significant differences between our models and our sentences.) Now some of our models can't be reasonably represented in sentences, but for many cases we still have to use sentences to point at the models (or don't even use the models at all). To me, the consequence of all this is that most of the work that needs to be done on thinking about important, difficult problems is not primarily “logic” (in the sense of dealing with premises, operations, and inferences) but “extra-logical” (trying to understand contexts and boundaries and models and tests before trying to do anything like classical thinking). I often try to point out in talks that modern thinking is “not primarily logical” and this is what I'm driving at.

da02 on Sept 24, 2017 [–]

modern thinking is “not primarily logical” and this is what I'm driving at.

Is that because “logical” thinking makes sense in one context, but fails in a better context? (ie. Pink vs blue: Change the context, understand it, and then later the knowledge is found?)

alankay on Sept 25, 2017 [–]

Yes, it's not the apparent logic in the operations that counts but the choice of definitions (and these include the definitions of the operations). I wrote a paper for the “Mind-Body Conference” in 1975 that discussed this. The idea is that “reasonable things” are done within “stable neighborhoods of 'truth' ” that can be thought of as regions whose boundaries are the definitions. Inside we pretend the definitions are true, while the larger view from above knows the neighborhood is arbitrary. This is an old idea (e.g. Euclid). The “modern” part of it is that the definitions are not assumed to be true outside the neighborhood. This is simple and powerful because the results are larger worlds that can be compared to others and to phenomena and experiment (and without setting up dogmas and religions). Because of the way human minds work, there will be tendencies to think the definitions are “actually true” (and so the logic inside the boundaries) if they and the conclusions are appealing. But the form of this knowledge helps keep us saner if we are diligent about drawing the maps and boundaries correctly. Much of science has this character, and the model helps to understand what it means to “know” something scientifically. Science is a negotiation between “what's out there?” and what we can represent inside our heads via phenomena on the one hand and the “boundaries and neighborhoods” on the other. Einstein's nice line about “math vs. reality” hits it perfectly (“As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.”). Newton's Principia was a huge step along these (and many other lines). He completely separates out the math part in the first large parts of the work. And only then does he start looking to see how the math models map onto observed phenomena. To say it again, Science is partly about being very careful about how the definitions map to “out there”. In terms of context, if you are aware that you are in contexts – the first step! – and aware and careful about the ones you are using – the next steps! – there is a chance that “reasonable thinking” might happen.

da02 on Sept 25, 2017 [–]

Science is partly about being very careful about how the definitions map to “out there”.

I keep tripping on this part. Definition as in understanding the boundary? As in trying to find the boundaries between better, perfect, and the impossible (ie sweet spot)?

Because of the way human minds work, there will be tendencies to think the definitions are “actually true” (and so the logic inside the boundaries) if they and the conclusions are appealing.

Does this mean: Humans are pre-disposed to a model of how the Universe works – Zeus's thunderbolts, witches, Saturn/Satan/Santa, geocentrism, etc. – and science is a set of guidelines to help prevent us from shoehorning the Universe into the inaccurate mental models we are pre-disposed to believing?

while the larger view from above knows the neighborhood is arbitrary

Can the leap from Geocentric model to Kepler's work be an example of this? As in: Geocentric model becomes irrelevant with Kepler's discoveries and p.o.v.?

alankay on Sept 25, 2017 [–]

1. The definitions are the boundary – these are what are used to make the interior. The analogy is to definitions (used to be called axioms) in math. Simple for math (because it is only about itself). Difficult for science because we can't just make up the definitions, we have to try to find ones that have some mappings to “out there”. A boundary for chemistry is the physics standard model. Within physics the model is pretty accurate but unsatisfying as far as knowledge. They would like to have a better boundary, and get the standard model (and the key constants) from it. But it makes a good boundary for chemistry to make excellent chemical models. Similarly (oversimplifying here) chemistry makes a good boundary of definitions for molecular biology. Note that in this scheme of thinking, the “knowledge and meanings” exist inside the boundary, but don't include the boundary. One of the issues addressed in this approach is how to make progress in “knowing” without infinite regresses. Philosophically, it is a kind of pragmatism. As I mentioned elsewhere, science is a negotiation between two different kinds of things, not a set of truths. It has many things in common with mapping (and making good maps is a branch of science, and one of the real starts of real science).

2. We are predisposed to believe things. Bacon's notion of why we needed to invent a “new science” was to create a set of processes and heuristics that would help us deal with, and get around to some extent, “what's wrong with our brains”. As a young scientist, I got the warning that is given to most young scientists: “Beware, you always find what you are looking for!” Some of the interesting examples of good science turning into belief revolve around Newton and both Maxwell's Equations and the orbit of Mercury (neither are “Newtonian”). And if you look at the history you'll see that Newton was a lot less Newtonian than many of his followers. And yes, we also seem to have some things that are easier to imagine than others – gods, demons, witches, etc. seem easy, but future floods etc. seem hard.

3. Sure. E.g. if you think things have to be circles, then this will be part of your implicit context for thinking about orbits. The geocentric model used circular orbits and then epicycles to correct them and save the theory (and there are some great metaphors there for lots of human thinking). But it's important to realize that Copernicus also used circular orbits, and they also used epicycles to save that theory. Kepler worked with Brahe and admired him, so decided to trust his measurements. This led to a different model. The planets themselves didn't care about any of the models. The definitions are still not -true- and the neighborhood is still not the phenomena. It's just better. You only get Newton from Kepler, not Maxwell or the orbit of Mercury – and then Einstein for both.

da02 on Sept 25, 2017 [–]

A few years back, I wondered: why is it I believe in things no one else agrees with? I seemed to look at the different schools of thought, pick the best one that answers as many questions as possible, and believe in it until something better comes along (ie can it solve more problems than the previous school of thought). (But I was at least somewhat aware it was not scientific or scholarly – a combination of logic, emotion, and preferences.) My peers in HS who went on to higher education (Stanford, Yale, Harvard, Worcester, MIT, etc.) agree with the status quo, work within it, and have careers, children, and so on. (Nothing wrong with that, and there is no professional jealousy on my part.) I suspect, however, they are shipbuilders who are treated as explorers. Scientific researchers who find juicy hacks and turn them into over-priced drugs with dangerous side-effects. You, however, take a far superior approach to accumulating and developing new knowledge. It seems to come more naturally to you than to your peers. (Even taking into account your mathematician/musician parents and the good teachers you encountered.) Which cities and universities around the world have been most receptive to your lectures? (I would assume it would be some in Canada, the Scandinavian countries, and China, with the least receptive being in the US.) How was poor Faraday able to contribute so much to science and technology despite the vast resources of the classically educated members of The Royal Institution and Royal Society? (Granted, there were many people who contributed before/during/after Faraday's time to allow him to make those discoveries. Then it took others like Maxwell to carry on even further.)

alankay on Sept 26, 2017 [–]

In 2004 I wrote a tribute to the research community I grew up in, “The Power of the Context” (http://www.vpri.org/pdf/m2004001_power.pdf), and it may help with some of your questions. I was just one of many in this community, and as I said at a recent Stanford lecture: “The goodness of the results depends primarily on the goodness of the funders”. Every era has enough of the kind of people who like to ask “what would real progress be?” and then try to make it happen. The large differences in “real progress” have fluctuated as the good funding has fluctuated (right now, and for quite a few years, there has been almost none). A very important first step in this is to put a lot of work into “learning to see the present” and then where it came from (this is a lot of work, and it's not what human minds generally want to do). This will free up most thinking, open up many other parts of the useful past, and especially open up thinking about much better futures.

