guns on this morgue ship (archived post)

by benjamin hollon on march 4, 2021

Ignore the cryptic title for the moment; I’ll get into that soon enough, don’t worry. First, though, I’m going to give you some background.

This week I got into Machine Learning (or ML) for the first time. ML is an area of Computer Science in which computers teach themselves how to solve real-world problems based on lots of data.

Think of it like this: most coding gives a computer inputs and rules for what to do with them. The computer then executes the instructions on the inputs and provides the user with an output.

In Machine Learning, on the other hand, the user does not yet know the rules. For example, we don’t have an exact definition of what makes a phrase sarcastic, so we can’t give a computer rules to detect it using conventional programming. We can, however, give the computer examples with and without sarcasm and ask it to come up with its own rules. That process is Machine Learning in a nutshell.

This project was, in fact, my first attempt at Machine Learning. I handed my script a dataset of over 26,000 article titles labeled with whether they contain sarcasm and set it to work trying to identify patterns; specifically, it was looking for which words were most common in sarcastic phrases.
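For the curious, here is roughly what that kind of setup can look like. This is only a minimal sketch in Keras, not my actual script; the file name, the field names, and the model shape are all assumptions for illustration.

```python
# Minimal sketch of a sarcasm classifier in Keras (not my actual script).
# The file name, field names, and hyperparameters are assumptions.
import json
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# One JSON record per line, each with a headline and a 0/1 sarcasm label.
with open("sarcasm_headlines.json") as f:
    records = [json.loads(line) for line in f]
titles = [r["headline"] for r in records]
labels = np.array([r["is_sarcastic"] for r in records])

# Turn each headline into a fixed-length sequence of word indices.
tokenizer = Tokenizer(num_words=10000, oov_token="<OOV>")
tokenizer.fit_on_texts(titles)
padded = pad_sequences(tokenizer.texts_to_sequences(titles), maxlen=30)

# Small embedding model that learns which words tend to signal sarcasm.
model = models.Sequential([
    layers.Embedding(10000, 16),
    layers.GlobalAveragePooling1D(),
    layers.Dense(24, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability that a title is sarcastic
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

# Hold out part of the data to measure accuracy on titles the model hasn't seen.
model.fit(padded, labels, epochs=10, validation_split=0.2)
```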

Now, this process isn’t perfect; it can only be as accurate as the data I put in, and I certainly didn’t give it data for every word in the dictionary, so it had to guess with some words. In the end, though, it tested out with 83% accuracy, so I called it a win.

Side note: Once I had a trained model, I tried plugging in the phrase “Benjamin is awesome at Machine Learning” to see what it would spit back at me. According to it, that sentence has a 56% probability of being sarcastic. Take that as you will; it could be an inaccurate or inadequately trained model, or my code may be making snide comments about my coding ability. I’d still count this project as a success in the latter case since I’d have taught a computer how sarcasm works. ;)
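In case you’re wondering what that check might look like in code, here’s a sketch that continues the hypothetical Keras setup above; only the 56% result itself comes from my real model.

```python
# Sketch of asking the (hypothetical) trained model about a new phrase.
phrase = ["Benjamin is awesome at Machine Learning"]
sequence = pad_sequences(tokenizer.texts_to_sequences(phrase), maxlen=30)
probability = model.predict(sequence)[0][0]
print(f"Probability of sarcasm: {probability:.0%}")
```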

Hopefully, you’ve got a picture of what ML is and how it works now. You don’t need to understand the underlying concepts thoroughly, but a general idea of what I’m working on will help you as I dive into this article’s topic.

Now, on to my next project. One of my biggest dreams in coding has been to create a program capable of parsing and responding to writing. In the Machine Learning world, this is known as Natural Language Processing (or NLP).

I figured I might as well take a stab at it, so I came up with a basic project: an autocomplete function. You know how a phone keyboard suggests the word you might want to type next? That’s what I was aiming for.

I won’t go into the details of how that worked. In short, the program parsed the input text I gave it, looked at the relationships between words and how often they appeared in specific patterns, and, based on that, tried to predict the next word given a sequence.
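To give you a flavor of the idea, here’s a toy version: count how often each word follows a given pair of words, then predict the most frequent follower. My actual script differed in the details, but the spirit is the same.

```python
# Toy next-word predictor: count which word most often follows each pair
# of words in the training text. My real script was more involved.
from collections import Counter, defaultdict

def build_model(text):
    """Map each pair of consecutive words to counts of the words that follow it."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for a, b, c in zip(words, words[1:], words[2:]):
        followers[(a, b)][c] += 1
    return followers

def predict_next(followers, a, b):
    """Return the word most often seen after the pair (a, b), if any."""
    counts = followers.get((a.lower(), b.lower()))
    return counts.most_common(1)[0][0] if counts else None
```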

But I needed to decide what input to give it. The more text I gave it, the better the model would be, but the longer it would take to build. I wanted something I could run casually.

So I decided to give it six public domain short stories by Ray Bradbury[1]. I mean, why not? It seemed like a cool idea to see how one of my favorite authors would finish my sentences. Developed far enough, it could even be a useful tool in my own writing.

So I built it and ran the code. It took about 10 minutes to process all the input text, which seems slow for a computer but is incredibly fast considering that there were over 5,000 unique words and something like 39,000 in total.

Once I knew my model compiled without errors, I needed to test it. I thought up something random off the top of my head: “My name is”. I told the script to predict the next nine words.
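Using the toy model sketched earlier, that prediction loop might look something like this; the file name here is a stand-in for the combined story text, not my real setup.

```python
# Generate nine words from a seed phrase using the toy model above.
# "bradbury.txt" is a placeholder for the combined story text.
stories = open("bradbury.txt", encoding="utf-8").read()
followers = build_model(stories)
words = "my name is".split()
for _ in range(9):
    next_word = predict_next(followers, words[-2], words[-1])
    if next_word is None:
        break
    words.append(next_word)
print(" ".join(words))
```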

The output? “My name is knowing that we’ve got guns on this morgue ship.” (Hence, the title of this post. Told you I’d get to it.)

Hilarious, right? At least I thought so at first. Then I looked closer at the stories I gave it and was able to see its reasoning; it had some excellent rationale behind its decisions. For example, one of the stories is named “Morgue Ship,” and at one point a character asks whether they have guns.

So, as ridiculous as it sounds, the computer did what it was supposed to. Then why does it seem so absurd to us?

Here’s my answer: language is hard. It’s miraculously complex in a way that we can’t even comprehend in conscious thought. My script only processed these words in terms of numbers; it had no concept of a “meaning” behind them. To a computer, meaning does not exist. It only performs the task it is given and doesn’t see any reason behind it.

But the very nature of language implies meaning. Every phrase in an ordinary conversation is packed full of memetic complexity at a level that even the best computers we have can’t ever comprehend, no matter how large a sample size we give them.

And yet, somehow, this skill comes naturally to us. In fact, it’s so natural that children learn to speak on their own, the very process that Machine Learning is modeled after.

So I took these thoughts and applied them to another concept I’ve seen a lot of lately: mind uploading, or the idea that we could take all of our memories and brain processes and copy them onto computers and live immortal lives there.

It sounds all well and good, but when you look at how my ML model did everything right and still didn’t get the point, you start to wonder how far that extends when it comes to computers. There are better NLP models than the one I made; I spent only a day on mine, while researchers have spent thousands of combined hours working on this puzzle. They’ve created solutions (see GPT-2) that generate mind-boggling articles almost indistinguishable from what we could write.

But at the core, if you strip off all the extra paint and development time, these models aren’t very different from my own. They don’t get the point either. They’re just better at pretending they do.

And so I apply this to mind uploading: even if we could copy human thoughts so perfectly that a computer could simulate our minds, would we ever trust those simulations as much as the real people?

Think about it: there is absolutely no way that one can prove consciousness in a machine. All of our computer science knowledge suggests that, at best, we can only create a really good imposter. I would never trust a simulation of me to make the right decisions because I know that, at heart, the computer does not understand what it all means.

Here’s a thought experiment to close off this week’s article: What happens if a mind-uploaded version of you tries to murder someone? Does that mean that, because it’s a supposedly-perfect representation of you, you are a high-risk person to be around? Should the police pre-emptively arrest you because your simulation committed a crime?

If you say yes, why? Does this simulation really have any authority over who you are? Can it be considered a thinking, feeling human being even though a computer’s CPU runs it?

And if you say no, why? Whether or not it’s a perfect representation of you, isn’t this simulation the best guess we have? Shouldn’t we go with the safest option and arrest you anyway, to avoid negative consequences in case it really is an accurate representation of you?

And for proponents of both sides, I have a question: how can we determine whether a machine is intelligent? Would you ever trust a computer-simulated replacement of yourself?

I’d be grateful if you commented below with your thoughts. This topic genuinely interests me, and I’d like to hear other people’s opinions on it.

Note: the code for the comment system is brand new, so let me know if it’s not working.

