I came across this corpus of Trump speeches today. So naturally, I fed it into Karpathy’s RNN. Can a computer write an article? Probably not, not yet. But it certainly can do a Trump speech. Behold: MACPA406-SG:rnn shawngraham$ th sample.lua cv/lm_lstm_epoch18.09_0.9175.t7 -temperature 0.5 -primetext "I'm Donald Trump" I’m Donald Trump. I don’t want to do it. I want to thank you. I have a very sad the guy with the last time his one of the great respect for the Second Amendment.
Increasingly, archaeology data are being made available openly on the web. But what do these data show? How can we interrogate them? How can we visualize them? How can we re-use data visualizations? We’d like to know. This is why we have created the Open Context and Carleton University Prize for Archaeological Visualization, and we invite you to build, make, and hack the Open Context data and API for fun and prizes.
Recently, there’s been a spate of openness happening in this here government town. This post is just me collecting my thoughts. First thing I saw: a piece in the local paper about the Canadian Science and Technology Museum’s policy on being ‘open by default’. The actual news release was back in April.
* and by sensible, I mean, not mucking about with footnotes. Can’t abide footnotes. But I digress. Earlier today, @archaeo_girl asked, Is there a good tool to take block of text and help isolate parenthetical citations within it, to make a list of sources? @electricarchaeo — Sarah Rowe (@Archaeo_Girl) June 16, 2016 And Sasha Cuerda came up with this: http://regexr.com/3dl04. This is awesome.
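Sasha’s regexr pattern isn’t reproduced here, but the same idea can be sketched in a few lines of Python. The pattern below is my own rough guess at matching citations like (Smith 2010) or (Smith and Jones 2012, 45), not the pattern from the regexr link:

```python
import re

# A rough, hypothetical pattern for parenthetical citations such as
# (Smith 2010) or (Smith and Jones 2012, 45). My own guess at a workable
# regex, not the one from Sasha Cuerda's regexr link.
CITATION = re.compile(
    r"\(([A-Z][A-Za-z'-]+"          # first author surname
    r"(?:\s+(?:and|&)\s+[A-Z][A-Za-z'-]+)*"  # optional co-authors
    r",?\s+\d{4}[a-z]?"             # year, e.g. 2012 or 2012a
    r"(?:,\s*\d+(?:-\d+)?)?)\)"     # optional page or page range
)

def extract_citations(text):
    """Return a sorted, de-duplicated list of parenthetical citations."""
    return sorted(set(m.group(1) for m in CITATION.finditer(text)))

sample = ("Pottery styles shifted markedly (Smith 2010), though others "
          "disagree (Smith and Jones 2012, 45). See also (Smith 2010).")
print(extract_citations(sample))
```

A list like this could then be checked against a bibliography; the regexr link does the same job interactively in the browser.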
Caution: pot stirring ahead I’m coming up on my first sabbatical. It’s been six years since I first came to Carleton – terrified – to interview for a position in the history department, in this thing, ‘digital humanities’. The previous eight years had been hard, hustling for contracts, short term jobs, precarious jobs, jobs that seemed a thousand miles away from what I had become expert in.
I’ve been using Reveal.js for my course slide decks and for most presentations I’ve been giving of late. I like writing my presentation in markdown – quickly banging out ideas – and then feeding that markdown into Reveal. This has the added benefit that I can use Pandoc to create handouts or whatever super duper quickly. If you write your slidedeck for Reveal.js in html from the word go, you can of course do a whole lot more and take full advantage of what Reveal offers.
At dawn Rob Anybody, watched with awe by his many brothers, wrote the word: PLN on a scrap of paper bag. Then he held it up. ‘Plan, ye ken,’ he said to the assembled Feegles. ‘Now we have a Plan, all we got tae do is work out what tae do.” -Terry Pratchett, A Hat Full of Sky I’m reminded of this as I see retweeted a slidedeck by George Veletsianos from this year’s Congress of the Humanities, ‘Crafting a Research Agenda (in memes)’.
Not that I don’t have a zillion other things to do, but what the hell. Here’s how I got Pastec.io installed on my Mac laptop. (I have several thousand images from Instagram related to the bone trade. I’m hoping that Pastec can help me find/deduce/map/elucidate the visual grammar of all this). Pastec requires opencv, jsoncpp, and libmicrohttpd. If you’ve got homebrew installed: $ brew install homebrew/science/opencv $ brew install jsoncpp (see https://github.
This post represents one stage in how I prepare for public speaking. I first begin by sketching out roughly what kinds of things I want to have up on the screen to support me in a reveal.js slidedeck. I feed it from a markdown source file, so I usually just bang out a number of headings or key thoughts or images in markdown. Then, I try writing out the rough argument of what I want to say.
I like Hypothes.is. I’ve used it to return feedback to students, and I’m trying to be more mindful about what I read online in that I should annotate the damned stuff and keep everything handy in one locale. Since you can get a stream of your annotations, one way I’ve tried to keep things together is to put the feed on my open notebook. It works, but it ain’t pretty.
In my email this morning: What are the two or three tools you would consider most useful for students to learn about in the arts/humanities, regardless of their discipline and with a focus on what would be useful in doing ‘traditional’ kinds of tasks in regular classrooms? A very good question. I asked on twitter, and received a wide variety of responses.
Audiogrep is fun. Grab some audio, automatically transcribe it, then chop it apart and stitch it all back together again in amusing ways. I fed it my talk from the Interactive Pasts Conference (video archive). My video: [singing over video of Caesar’s assassination] The more we get together, together, together; the more we get together, the happier we’ll be.
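Audiogrep’s own command line isn’t shown here; as a toy illustration of the underlying idea (transcribe the audio, then splice out every segment whose text matches a pattern), assuming an invented list of timestamped transcript segments:

```python
import re

# Toy stand-in for a forced-aligned transcript: (start_sec, end_sec, text).
# These segments and the pattern are invented for illustration; audiogrep
# itself builds this data from real audio plus a machine transcription.
segments = [
    (0.0, 1.2, "the more we get"),
    (1.2, 2.0, "together"),
    (2.0, 3.1, "the happier"),
    (3.1, 4.0, "we'll be together"),
]

def audio_grep(segments, pattern):
    """Return the segments whose transcript matches the pattern,
    i.e. the clips a supercut would splice back together."""
    rx = re.compile(pattern)
    return [seg for seg in segments if rx.search(seg[2])]

clips = audio_grep(segments, r"together")
print([text for _, _, text in clips])
```

The real tool then cuts the audio at those timestamps and concatenates the clips, which is where the amusement comes from.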
This past year, I’ve taught three digital history/humanities classes at Carleton. HIST3907o, Crafting Digital History, HIST5702w Digital History Methods as Public History Performance, and DIGH5000 Introduction to Digital Humanities. A fourth course was the open-access version of HIST3907o, but that is not counted in my tally, unfortunately. Finally, my MA student, Rob Blades, completed his combined Public History and Digital Humanities MA project.
I’m updating my teaching philosophy statement; periodically this is necessary for both cynical and genuine reasons. Cynical, because of paperwork requirements; genuine, because I think I actually believe this, when I see it written out. It helps me remember what the hell I’m trying to do in any given class, and reminds me that not everyone is going to buy into what I’m trying to do.
I fed a number of interviews with Donald Trump into a recurrent neural network. An RNN learns to predict which letter ought to come next, given the text it has seen so far. The more data you can feed it, and the more working memory your machine has, the better the eventual results. Once you’ve trained the model, you sample from it and see what kind of text gets generated.
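char-rnn itself is Lua/Torch, but the sampling step (what the -temperature flag in the command above controls) can be sketched in Python. The alphabet and the unnormalized scores below are made up purely to show the mechanics:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from unnormalized log-probabilities ('logits').
    Dividing by the temperature sharpens the distribution when it is
    low (the model plays it safe) and flattens it when it is high
    (the model gets adventurous)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                     # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Made-up scores over a tiny alphabet, just for illustration.
alphabet = list("abcd")
logits = [2.0, 1.0, 0.5, 0.1]
rng = random.Random(42)
text = "".join(alphabet[sample_with_temperature(logits, 0.5, rng)]
               for _ in range(10))
print(text)
```

At -temperature 0.5, as in the command above, the model mostly picks its favourite letters but still wanders occasionally, which is roughly why the output reads as plausible-but-unhinged Trump.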
Terry Pratchett is mentioned for all of 2 seconds in the video; why YouTube chooses that *one* still for this… anyway…
I’ve got 12-15 minutes tomorrow, at Carleton’s 3rd data day. That’s not a lot of time. I’ve written out roughly what it is I want to talk about – but I go off-script a lot when I speak, so what’s below is only the most nebulous of guides to what’ll actually come out of my mouth tomorrow. In any event, the stuff below would take more like 25-30 minutes if I stuck to the script.
The prof looked around the room brightly (or at least, as brightly as one can on a Monday morning in March). “So let’s talk about your final projects. Where are we at? What’s working, where can we troubleshoot?” Murmurs from the class. Some volunteered. “Going well, just have to meet later today to talk about it…” or “having trouble making variables work: has anyone run into…”. All good stuff. The last group.
I don’t know how to do this. I worry that whatever I did say would only make it worse. How do you help? Your students never stop being your students. You work with them days on end, through periods of intense frustration on either side, through times of amazing energy and excitement, to joy (graduation!). I’ve been teaching one way or another since about 2003. Some of my students have gotten married, had kids, got great jobs.
March turned into a busy month. Silly me. Anyway, hard on the heels of last month’s address to Ottawa’s public high school history teachers (materials here; Rachel Collishaw’s write-up of the day here) I’ll be next at: Colder Than Mars, a series out of the University of Regina. I’ll probably be doing something regarding sonification of historical data; March 4th ThatCamp North Country at St. Lawrence University, NY.