A Rosuflem by Any Other Name

[Image: NN Roll Call.png]

Hmm, let’s see…Jasqquosius? Epheny?? Siigphry?! Can’t we just get a good old-fashioned Kaytlynne or Jaysonn?

I was delighted the other day by a few different articles about an artificial neural network that had been given the task of naming a bunch of guinea pigs up for adoption at the Portland Guinea Pig Rescue. Some of the names are pretty appropriate guinea-piggy names like Fuzzable, Nuzzy, and Fabsy, while others are a little…strange. A few of my favorites are Spockers, Trickles, and Boooy. (Do yourself a favor and see more of them here!)

To understand why anyone would ask a neural network to do such a thing and how they’d end up with these results, we need to have a basic idea of what a neural network is. Actually, you may not know this, but you already have a neural network right there in your head. In fact, it’s been there since you were a baby! Creeeepy.

No need to worry: your own neural network is just the connected neurons of your brain. Those biological networks inspired artificial neural networks, which are basically computing systems that try to mimic the way we think humans learn things: not from being given specific programming with defined rules, but by filling in the gaps in the information they’re given. With a database of information, a neural network can find patterns and draw conclusions about the “rules” that govern that collection of information.

For example, as a kid, there was probably no single time when you were told in exhaustive detail every possible rule of good manners at once (who knows, though, everyone’s family is weird). More likely, you got some lectures that gave you a few rules at a time, but mostly you pieced it together from your parents’ and other adults’ reactions to things you did or saw others do. “Aha!” you thought. “Taking someone else’s toy is rude, but so is hitting them when they take my toy!” It’s a long and sometimes confusing process, but it’s part of true learning, rather than having a prefabricated set of instructions plopped into your brain.

Computer neural networks work in much the same way. Given, for instance, a whole set of guinea pig names, the network will analyze them to find the most common features—number of letters and words, letter combinations, that sort of thing. It tries to figure out what principles unite the data set, then generates new content that follows those principles.
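
To make that concrete, here’s a toy version of the idea in Python with PyTorch: a character-level recurrent network that learns, from a list of names, which letter tends to follow which. This is just a minimal sketch using a handful of names from this post as stand-in data; Shane’s actual tools, training data, and settings would look different.

```python
import torch
import torch.nn as nn

# Toy data: a few of the names mentioned above. A real run would train on
# the rescue's full list; the model size, learning rate, and epoch count
# here are illustrative guesses, not anyone's actual settings.
names = ["fuzzable", "nuzzy", "fabsy", "spockers", "trickles", "boooy"]
chars = ["^", "$"] + sorted(set("".join(names)))   # ^ = start, $ = end
stoi = {c: i for i, c in enumerate(chars)}

class NameRNN(nn.Module):
    def __init__(self, vocab_size, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, h=None):
        y, h = self.rnn(self.embed(x), h)
        return self.out(y), h                      # scores for the next char

model = NameRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: at every position in every name, predict the next character.
for epoch in range(300):
    for name in names:
        seq = [stoi[c] for c in "^" + name + "$"]
        x = torch.tensor([seq[:-1]])               # input characters
        t = torch.tensor(seq[1:])                  # next-character targets
        logits, _ = model(x)
        loss = loss_fn(logits[0], t)
        opt.zero_grad()
        loss.backward()
        opt.step()
```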

This means that sometimes a neural network’s results will be exactly in line with what you expect, and sometimes they’ll be hilariously off the wall. It’s fascinating to see how the “rules” the network comes up with compare to our own expectations.

Naming guinea pigs is far from the only entertaining thing neural networks have been trained to do over the last few years. Dr. Janelle Shane, the brilliant soul responsible for the guinea pigs, has made a hobby of training neural networks on various data sets. (One setting you can change is the “temperature,” which controls how far the network is allowed to stray from the safest, most probable patterns in its training data; Shane plays around with this to get more or less imaginative outputs, and there’s a quick sketch of how it works right after the list.) Here’s a selection of my favorites from some of her fantastic naming tasks:

Paint colors: Ghasty Pink, Stoner Blue, Bank Butt, Catbabel, Dorkwood

Good, aren’t they? There’s more!

Metal band names: Vermit (Thrash Metal/Crossover/Deathcore, United States), Sespessstion Sanicilevus (Melodic Death Metal, United States), Black Clonic Sky (Black Metal, Greece), Inbumblious (Doom/Gothic Metal, Germany), Dragonsulla and Steelgosh (Heavy Metal, Tuera)

D&D spells: Hold Mouse, Finger of Enftebtemang, Purping Lightsin, True Steake, Mind Blark

[Image: NN True Steake.png]

Truly, a perfectly executed True Steake is most rare. Well done! (I admit, I took the easy one. I’m drawing a total blark on how to illustrate some of these others.)

Doctor Who episodes: “The Keds of Death”, “The Unicorn and the Daleks”, “The Awkroids of Tara”, “The Wheeen Death”, “Planet of Fire in Space”

Irish songs: “Tin the Connand the Wallop”, “Sloom of Youth”, “Lard of the Land”, “Seat of Slugs”, “Thing Mop the Bog”  [Go here for a similar experiment that involved writing the music as well!]

Pokémon: Tortabool (Ability: Healy Stream), Staroptor (Ability: Stench, Hidden Ability: Stick Hat)  [These ones are illustrated!]

Proverbs: “Death when it comes will have no sheep”, “A good wine makes the best sermon”
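
As promised, here’s what that temperature knob actually does, at least in the usual sampling setup. (I’m assuming a standard softmax-with-temperature scheme here; I don’t know the exact internals of the tools Shane used.) The network produces a score for each possible next character; dividing those scores by the temperature before turning them into probabilities makes low temperatures play it safe and high temperatures take weird risks:

```python
import torch

def sample_char(logits: torch.Tensor, temperature: float = 1.0) -> int:
    """Sample one character index from the network's raw output scores."""
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, 1).item()

# Demo with made-up scores over a 5-letter alphabet. At temperature 0.2 the
# top-scoring letter wins almost every time; at 2.0 the long shots show up.
logits = torch.tensor([2.0, 1.0, 0.5, 0.1, -1.0])
for temp in (0.2, 1.0, 2.0):
    picks = [sample_char(logits, temp) for _ in range(1000)]
    print(temp, [picks.count(i) for i in range(5)])

# To generate a name with the toy model from earlier: feed in "^", sample
# each next character with sample_char, and stop when "$" comes out.
```

In other words, low temperature gets you Fuzzy and Fluffy forever; crank it up and you start getting the Boooys of the world.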

Of course, plenty of other people work with neural networks, and some of them even play with them like Shane does. Andrej Karpathy provided us with the lovely baby names I’ve used in my opening illustration—and post title, since I haven’t found a neural network-generated list of flowers yet. You can see the whole list here, but Jean-Xelly, Grederio, Katharinus, and Ostank are a few of my favorites that didn’t make it into the cartoon caption.

Karpathy discusses his recurrent neural network at length, if you want a more in-depth look at how it works. He also includes samples of neural network output for Shakespeare, Wikipedia, and algebraic geometry.

These word games are a great demonstration of what neural networks are capable of, but there are plenty of other applications. Google researchers have been training networks on databases of sketches instead of names, and the results, as explained in Google Brain Resident David Ha’s blog post, are extremely cool.

After training on a database of sketches of a certain object, like cats, the network was fed new sketches and asked to reconstruct them, still working under the “cat” model it had developed from the database. It turns out the network can correct deviations from the pattern, like a three-eyed or five-legged cat, which is a great result.
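
Ha’s actual model is sketch-rnn, a sequence-to-sequence variational autoencoder that works on pen strokes, which is far too much machinery to reproduce here. But the “pull odd inputs back toward the learned pattern” behavior shows up even in a deliberately simplified stand-in, like this toy autoencoder trained to reconstruct points on an arc:

```python
import torch
import torch.nn as nn

# Not Ha's sketch-rnn; just a toy autoencoder to show the principle. Train a
# network to squeeze "normal" examples through a narrow bottleneck and
# reconstruct them, and it can only reproduce things that fit its model of
# normal, so deviant inputs get pulled back toward the pattern.
torch.manual_seed(0)

# "Normal" data: points on an arc (a stand-in for well-formed cat sketches).
theta = torch.rand(512, 1) * torch.pi
data = torch.cat([torch.cos(theta), torch.sin(theta)], dim=1)

model = nn.Sequential(
    nn.Linear(2, 16), nn.Tanh(),
    nn.Linear(16, 1),               # 1-D bottleneck: the learned "pattern"
    nn.Linear(1, 16), nn.Tanh(),
    nn.Linear(16, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=0.01)

for step in range(2000):
    noisy = data + 0.1 * torch.randn_like(data)   # slightly deviant inputs
    loss = ((model(noisy) - data) ** 2).mean()    # reconstruct the clean form
    opt.zero_grad()
    loss.backward()
    opt.step()

# A "five-legged cat": a point well off the arc should come back out much
# closer to it, because the network can only draw what it knows.
odd = torch.tensor([[1.5, 1.5]])
print(model(odd), model(odd).norm())   # norm should land near 1
```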

It gets better, though, because it’s not just about comparing cats to cats. For a network trained on the “cat” database, any drawing will be catified. Give it a toothbrush, and you get a very whiskery cat. Give it a chair, and it figures out how to reinterpret the chair back as a long tail.

[Image: Cat Chair.png]

Note to self: claw-foot furniture sheds a lot more than I envisioned.

Most of the example sketches in Ha’s post are from networks trained for cats and pigs, but there’s an interesting one that uses both and can generate sketches that interpolate, or fill in the missing steps, between a cat sketch and a pig sketch, sort of an Animorphs effect. There are also interpolations between two forms of the same object, like a cat sketch with only a head versus the whole animal, and between very different objects, like the cat-chair combo. For the rest of their results, check out that blog post!
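
The interpolation trick itself is easier to show than the full model: once every sketch has been boiled down to a vector (its latent code), “morphing” is just blending two vectors and decoding each blend. Here’s the idea in a few lines; the encoder and decoder names are hypothetical placeholders, not Ha’s actual API:

```python
import torch

def interpolate(z_a: torch.Tensor, z_b: torch.Tensor, steps: int = 8):
    """Evenly spaced blends between two latent codes, endpoints included."""
    return [(1 - t) * z_a + t * z_b for t in torch.linspace(0, 1, steps)]

# Hypothetical usage with some trained encoder/decoder pair:
#   z_cat, z_pig = encoder(cat_sketch), encoder(pig_sketch)
#   frames = [decoder(z) for z in interpolate(z_cat, z_pig)]  # cat -> pig
```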

If you’re interested in seeing neural networks in action, you’re in luck. As part of Google’s attempt to assemble the largest doodle data set ever, you can test its neural network yourself by doodling for it and seeing how long it takes to recognize the object! This is really fun, and based on my own *ahem* rigorous research, I recommend doing it several times. The first time, draw everything as if you’re on the same team and you want it to be able to guess. (I’ve always gotten 5 or 6 out of 6 doing this, so it’s really pretty good.)

The next time, try to trip it up! Think of your first instinct for drawing a key, or a coffee mug—which way it’s facing, which parts you draw first, etc. Then do something different, like putting the handle of the mug facing mostly towards you instead of all the way off to the side. I drew “key” with the teeth facing down and to the left, and even though it was a perfectly good key, the neural network couldn’t figure it out. Similarly, “giraffe” with its head down grazing is unrecognizable.

Another thing to test is what features it uses to recognize objects. For example, I started “ocean” with a little squiggly water line at the top of the screen, and it guessed it just from that. For “violin,” I drew it with the tuning pegs down and on the left, strings and bridge and F-holes and all, and the network didn’t guess right until I drew a bow next to it at the last second.

The general conclusion seems to be that neural networks have enormous potential for machine learning, but they can also be the source of much hilarity, depending on what you’re looking for. Karpathy’s blog post, which I also linked to above, gives his explanation of recurrent neural networks, or you can get a less specialized description from Wikipedia that explains some of the serious research applications like speech recognition and medical diagnosis. Whatever you do, don’t miss Janelle Shane’s growing collection of neural network name lists!
