I love WordPress. I have loved using it as my own personal printing press in this little cul-de-sac of the web since 2006. But over the course of the past two years, the ratio of time spent fixing WP security problems to time spent writing public-facing content has gotten lop-sided. So lop-sided that I want to clean things up—and take a break from WordPress.
Fewer databases; fewer theme folders; fewer plugins. I just want less stuff collecting dust in my hosting account. And I want that stuff to be more secure. So I'm taking this site back to pure HTML + CSS. No PHP. No SQL. Nothing under the hood. Or rather, nothing under the hood on the server side.
During the course of the last two years, I also spent a significant chunk of time tinkering around with Python—I had a blast hacking my way through most of Zed Shaw's Learn Python the Hard Way and P2P University's Mechanical MOOC on Python. I admit that I completed neither. But I got comfortable enough with Python that when I went looking for static site frameworks, the one that caught my eye was Pelican.
Pelican takes in content written in minimalist markup styles (like Markdown), runs over it with a series of Python scripts, and outputs a set of static HTML files. Original content and output are separate, as is the set of theme templates.
This is much less flexible than WordPress for sure. I had to blindly install a passel of Python libraries to make the whole thing go. Updating requires re-running the whole site generation script and then re-uploading the whole site to my hosting account. In many ways, this is taking things back to how I built sites in 2004.
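For anyone curious what that looks like in practice, here's a minimal sketch of a Pelican configuration. The setting names come from Pelican's standard pelicanconf.py; the values are placeholders, not my actual setup:

```python
# pelicanconf.py -- a minimal Pelican configuration (placeholder values)
AUTHOR = 'Your Name'
SITENAME = 'My Static Blog'
SITEURL = ''                  # left empty for local previewing
PATH = 'content'              # where the Markdown source files live
TIMEZONE = 'America/New_York'
DEFAULT_LANG = 'en'

# Regenerating the site is then a single command:
#   pelican content -s pelicanconf.py -o output
# after which the 'output' folder of plain HTML gets uploaded to the host.
```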
While getting to tinker more with what's under the hood is one draw, and while the simplicity of having a static site without extra appliances tucked in the cabinets is another, the bigger point here is security. Without any scripts sitting in accessible web folders, and without a database at all to worry about, I can spend less time on security patches and chasing malicious code injected into obscure SQL tables. And that means more time to do what I've always liked doing on the web: sharing projects and ideas that I find exciting. At the moment, one of those projects is certainly Pelican.
There are two books sitting on my bedside shelf that I have been reading in short chunks for a long time. One is The Evolution of Childhood by Melvin Konner. The other is The Richness of Life, a collection of essays and book excerpts by Stephen Jay Gould. One is essentially a textbook; the other is full of stories. Both are great books about science in their own right. I'll eventually make it through each. But which one will I remember better? Will I learn more from the expository text or the narrative essays?
Probably the one with the stories, and there's research to support that hypothesis. Prof. Daniel Willingham at UVa writes on his blog about a new paper describing an experiment in which middle school students read two different science texts on Galileo or Marie Curie.
Students either read a straightforward expository text about the work of one scientist or the other, or they read a narrative text telling the story of Curie's or Galileo's work. The students who read the story retained more information shortly after the reading and again when tested a week later.
This is an interesting finding in and of itself, but Willingham pushes things further, arguing, "I'd like to broaden the view of 'narrative'":
You don't have to think of narrative just as the story of an individual or group of people; you can think more abstractly of conflict, complications, and the eventual resolution of conflict as the core of narrative structure.
In this formulation, narrative structure becomes a schema for organizing information. If a scientific story needs characters, conflict, complications, and resolution, then scientists are the characters, the problem or question is the conflict, complications and rising action are the fabric of the experimental process, and the resolution is what a scientist learns.
But I couldn't read Willingham's synthesis and not think about the teacher professional development I've been doing recently. Much of the PD that I've created is purely expository: here are the ideas, the definitions, the information. But if explaining an education program requires generating a schema with which teachers can organize information, then perhaps a story of how an instructional approach was developed or worked in a real classroom would be the stronger approach. At the very least, it would likely mean more entertaining professional development time; I'd rather tell a story than just flip through slides.
Bret Victor wrote an essay in 2012 that left me desperately wishing I were a computer engineer. "Learnable Programming" was a critique of 1) Khan Academy's newly released intro course on programming, 2) the Processing language the course focused on, and 3) decades of stagnation in programming pedagogy. The essay was funny, visually stunning, provocative, and so convincing in its presentation of an effective foundation for how to teach programming to learners by showing them what their code was actually doing that one could easily be led to believe that anyone who'd even considered the question of how to teach programming before was asleep at the pedagogical wheel. The intellectual effect was something akin to a first encounter with Edward Tufte's suggestion that graphs should show information instead of junky non-information. It was brilliant in a way that makes your temples burn and your mouth keep murmuring, "Yes. Yes. Yes!" Computers are awesome. Education is awesome. Teaching students how to do powerful things with computers = Best. Thing. Ever.
Ergo, I desperately wished that I knew enough about programming to join whatever project Victor was about to suggest.
Pivotal to the essay was the (brief) intellectual history of older languages and computer environments explicitly designed to teach students about programming. In this, Victor was unequivocal on the importance of Mindstorms:
The canonical work on designing programming systems for learning, and perhaps the greatest book ever written on learning in general, is Seymour Papert's 'Mindstorms.'
Given the brainy rush induced by Victor's essay, I had no other choice than to follow his direct instructions, "For fuck's sake, read 'Mindstorms.'" So I ordered a used copy within minutes of reaching the bottom of his article.
Mindstorms was published in 1980, while Papert worked at MIT, so he uses terms like "cybernetics" in earnest and offers astounding facts like "in the past two years, over 200,000 personal computers have entered the lives of Americans" (p 181). In that sense, the jargon and computational enthusiasm resonate with Tracy Kidder's The Soul of a New Machine. (That is, it's very dated, but remember that you're reading this for history and theory.) Now set this in concert with Papert's vision for the role of computers in building learning environments for children: it is grounded firmly in his years of work with the developmental psychologist Jean Piaget, a pioneer of constructivist education theory. The "build it yourself" and "ask lots of questions" spirit resonates with my 80s memories of LEGO sets and Sesame Street. Taken together, Papert's ideas, read three decades later, crystallize for me a certain utopian fetish for the intellectual, cultural, and political possibilities of kids screwing around with boxy, green-screened Apple IIes.
But on a more practical level, the book is full of clear-eyed distillations of how tinkering with computers can help teachers and students make thinking visible. Take, for instance, Papert's ideas here about the pedagogical power of "debugging" a computer program as a special case of tenacious learning-by-experiment:
The question to ask about the program is not whether it is right or wrong, but if it is fixable. If this way of looking at intellectual products were generalized to how the larger culture thinks about knowledge and its acquisition, we all might be less intimidated by our fears of "being wrong." This potential influence of the computer on changing our notion of a black and white version of our successes and failures is an example of using the computer as an "object-to-think-with." It is obviously not necessary to work with computers in order to acquire good strategies for learning. Surely "debugging" strategies were developed by successful learners long before computers existed. But thinking about learning by analogy with developing a program is a powerful and accessible way to get started on becoming more articulate about one's debugging strategies and more deliberate about improving them (p 23).
Thirty years on, there's a profusion of non-profits, projects, and start-ups trying to teach kids and adults alike to code. But what often goes unstated in the breathlessness about how cool it is to learn how to code is the fact that learning to code is, like learning to read and write, an extension of learning how to think. And learning how to think requires learning how to be "metacognitive"--that is, able to think about how your own ideas and thought processes work, so that you can find problems and correct them.
The LOGO interface allows users to draw using simple commands. Here's one way to draw a square:
FORWARD 100
RIGHT 90
FORWARD 100
RIGHT 90
FORWARD 100
RIGHT 90
FORWARD 100
RIGHT 90
This code is easy enough to decipher: go forward 100 units, turn right 90 degrees, repeat four times, and you've drawn four straight sides at right angles to one another.
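Since the Turtle is really just a position and a heading, you can simulate those commands in a few lines. Here's a small Python sketch (my own illustration, not anything from LOGO or Papert) that traces the square and confirms the Turtle ends up back where it started:

```python
import math

def run_turtle(commands):
    """Trace LOGO-style ('FORWARD', n) / ('RIGHT', deg) commands.

    Returns the list of points the pen visits, starting at the origin
    with the Turtle facing straight up the screen (heading 90 degrees).
    """
    x, y, heading = 0.0, 0.0, 90.0
    points = [(x, y)]
    for op, arg in commands:
        if op == 'FORWARD':
            x += arg * math.cos(math.radians(heading))
            y += arg * math.sin(math.radians(heading))
            points.append((round(x, 6), round(y, 6)))
        elif op == 'RIGHT':
            heading -= arg  # turning right decreases the heading angle
    return points

# The square from above: FORWARD 100, RIGHT 90, four times over.
square = [('FORWARD', 100), ('RIGHT', 90)] * 4
path = run_turtle(square)
assert path[0] == path[-1]  # the Turtle is back where it started
```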
But I believe that part of Victor's fascination with LOGO as a teaching tool lies in the simple metaphor of the Turtle. The Turtle is the stylus implied in the lines of code above. In LOGO, the "cursor" that moves around the screen, drawing your square (or whatever other shape), is called the "Turtle," and all the written commands in the code are simply instructions to the Turtle for where to go and what to do. The Turtle is a little metaphor that helps to crystallize the fact that writing an effective program is nothing more than figuring out how to provide a cute, determined animal with the right set of instructions.
But here's where things get cooler. Papert's team didn't just build LOGO software and use it to help students experiment with mathematical principles while drawing shapes on green computer screens. There were also real Turtles students could control using the exact same instructions. These real Turtles were dome-shaped motorized robots with retractable styluses that would draw programmed shapes and images on swaths of paper laid out on the classroom floor.
The link between the simple mathematics of a computer program and the real images a student could create is a perfect example of constructivist learning. Tinker with something abstract, see the results in the real world. Repeat over and over and the learner's understanding improves.
Furthermore, the conceptual link between the instructions a student writes in a computer program and the visual results of that code is another fundamental element of how students learn. "An important part of becoming a good learner is learning how to push out the frontier of what we can express with words," Papert writes (p 96). Essentially, he's arguing that part of expanding what a student knows is forcing them to encounter the edges of their explanatory powers: the link between code and image is itself pushing that expansion. When a student's words are insufficient to explain what he or she knows, a key element of the learning process is acquiring new words, new concepts, and new grammars to explain it. And when there is such an intimate link between the new words (code) and the concepts they express (the program output), the boundaries of what the student can express expand.
To illustrate this point, consider another example from the LOGO Foundation's web page introducing the basics of the language. After explaining foundational concepts like how to draw a line and a square, the example introduces how to combine and repeat instructions to create a picture made by iterating a drawing of a square over and over on top of itself, creating a pinwheel design that is difficult to describe in pure words, but which explodes onto the screen with just a few lines of code.
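I won't reproduce the Foundation's exact listing here, but the classic pinwheel is a nested repetition along the lines of REPEAT 36 [REPEAT 4 [FORWARD 100 RIGHT 90] RIGHT 10]: draw a square, turn 10 degrees, repeat until you've gone all the way around. Here's that same idea simulated in Python (my own sketch, assuming the 36-square variant):

```python
import math

def spin_squares(side=100, squares=36):
    """Simulate REPEAT 36 [REPEAT 4 [FORWARD side RIGHT 90] RIGHT 10].

    Tracks the Turtle's position and heading, and returns
    ((x, y), heading) once every command has run.
    """
    x, y, heading = 0.0, 0.0, 90.0
    for _ in range(squares):
        for _ in range(4):                 # one full square...
            x += side * math.cos(math.radians(heading))
            y += side * math.sin(math.radians(heading))
            heading -= 90                  # RIGHT 90
        heading -= 360 / squares           # ...then a small turn before the next
    return (round(x, 6), round(y, 6)), heading % 360

pos, heading = spin_squares()
assert pos == (0.0, 0.0)   # every square closes, so the pen never drifts
assert heading == 90.0     # 36 small turns add up to one full rotation
```

The pinwheel pattern falls out of those two asserts: each square returns the Turtle to its starting point, and the 10-degree turns sweep the square all the way around it.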
Papert walks through several different analogies for how computational thinking can illuminate instructional situations. There's an extended discussion of how learning to juggle is a process of "debugging"--correcting many small isolated errors to get a sequence of actions to work. There are explanations of how computer environments can shape better physics instruction that helps students make connections between physical principles and their own experience of objects in the world--as opposed to simply forcing them to encounter physics through a set of abstract equations. But he also anticipates criticism of this push for teaching "computational thinking" with a powerful argument for how it expands cognitive ability:
In my view a salient feature of human intelligence is the ability to operate with many ways of knowing, often in parallel, so that something can be understood on many levels. In my experience, the fact that I ask myself to 'think like a computer' does not close off other epistemologies. It simply opens new ways for approaching thinking. … But true computer literacy is not just knowing how to make use of computers and computational ideas. It is knowing when it is appropriate to do so (p 155).
And perhaps most importantly, Papert believes that the process of learning computational thinking is necessarily a social process that facilitates and depends upon the interplay between student, learning objective, and teacher. Again, the process of debugging is powerful because it re-writes concepts about what it means to be "wrong" and helps students think metacognitively, but it also creates questions and topics of conversation for student/teacher interactions, where the student practices pushing out the frontiers of what he or she can express with words.
"In my vision the computer acts as a transitional object to mediate relationships that are ultimately between person and person," Papert writes in one of the concluding chapters (p 183). In this case, Victor's essay on Learnable Programming did just that: a maze of networked computers served up his ideas and enthusiasm for Mindstorms, and hopefully I've been able to capture some of that excitement for you, dear reader, on your computer.
Part II of my loooooong review of the Dept of Ed report on "Expanding Evidence" is up at EdSurge. (Part I here). This section is about what design research can look like in action in education. I walk through some of the case studies in the report and link continual refinement through evidence gathering to teacher practice:
We should stop and emphasize that this process of data-driven continuous improvement is what highly effective teachers do in their classrooms every day. Such teachers adapt their teaching to make it useful for everyday instruction and they gather data (usually as formative assessment results) to make constant improvements to their own pedagogy. In high-performing schools, teams of teachers collaborate to scale effective methods across departments and buildings.
I actually find it hard to summarize exactly what design thinking in education looks like (hence the tripartite review...), so take a gander at the post itself.
I've got a new review up on EdSurge this week. Instead of a product review, it's a long look at a big report on new ways of thinking about how to evaluate and develop education technologies. Here's the opening:
Late in December, the U.S. Department of Education’s Office of Educational Technology dropped a 100-page draft policy report on “Expanding Evidence Approaches for Learning in a Digital World.” While a key focus of the report is on the kinds of information that we should marshal to evaluate learning technologies, the more important lesson of the document is about people. Through case studies and reviews of current research, the report makes a lot of recommendations, but three stand out.
Part I of this review provides a backdrop for current “evidence-based” research and focuses on the first of those recommendations: the notion that technologists, educators, and education researchers must collaborate to share evidence-gathering and analysis techniques from their respective fields. Parts II and III of the review, to be published separately, advocate two other major themes woven throughout the report: 1) the need for design thinking and continual improvement processes when building digital learning tools; and 2) the need to forge stronger lines of communication between education researchers, technologists, and educators, and to share insights that might otherwise remain siloed in existing disciplines.
You can read the rest of the report here: "Latest Department of Education Report Urges More Collaboration"