
Writing with Mem: How AI Enhances Creativity

Jul 26, 2022

Up until recently, the ability to use language to communicate meaning was seen as a uniquely human skill — something that separated man from animal, and, with the advent of modern technology, man from machine. With recent developments in natural language processing (NLP), a form of artificial intelligence that can understand and synthesize written content, we’re having to draw new lines in the sand. OpenAI’s GPT-3, for instance, can write a poem in the style of Coleridge, a short story in the style of Hemingway; it can write a passable recipe for chocolate cake, and script a short movie; and this summer it’s enjoying an ascendance in Twitter meme culture for its ability to produce its own 4chan-style greentext posts.

NLP is not yet at the point where it can reliably produce writing for journalistic or academic purposes: while it can make convincing arguments, the citations it adduces to support those arguments are generally fictional. And it struggles to maintain coherence across long-form pieces of writing. But it is good at demonstrating something that resembles human creativity, generating writing that can be both humorous (see those greentext posts) and genuinely moving (see these Zen kōans written by ChánAI). These advances in AI have been met with both awe and trepidation from those in creative professions, who marvel at NLP’s capabilities but fear that it may soon render ‘human writing’ obsolete. At Mem, however, we’re interested in using machine learning not to supplant human writers, but to support them. Here, I’m going to walk through a few examples of how Mem can do just that.

Mem understands that writing is not a linear process

If you’re a writer, you probably know that feeling of constructing a really beautiful line, or paragraph, or even something longer… and not knowing what to do with it. Maybe it’s a small descriptive piece of scene-setting, but you’re not sure how to expand it into a larger narrative; maybe you’ve come up with an interesting take on Freud, but you’re meant to be writing an article on Proust. You don’t want to get rid of these fragments, because you never know when they might prove useful or relevant in the future. So what do you do with them?

Well, you could store them in a conventional knowledge management system that relies on tagging. That’s better than nothing, but say it’s five years in the future and you’re searching for something you wrote today. Are the things you thought relevant to tag when you wrote the passage going to be the same things you’d think to search for now?

Take this line I wrote in Mem recently:

The ideal number: one more than you currently have.

It’s pretty vague and ambiguous, so how could I tag it for retrieval? Well, I could be literal and tag it with “numbers”, but that’s not really the sentiment of the line, and if I wanted to find it later I probably wouldn’t think to search for it using a tag like that. Perhaps I could tag it as “ambition”, but that’s just a meaning I’m projecting onto the line right now, when in the future I might think of it as being much more about “dissatisfaction”. I guess I could just tag it as “aphorism”, but I’ve already got a lot of those in my Mem as it is, and in five years I’d probably have to scroll through thousands of results before I came across this one.

The artificial rigor of most knowledge management systems demands that you know, in the present, how you want a piece of information to function in the future before you store it. Mem, however, is designed to let you think like a writer, not an archivist. Rather than relying on a personally-designed tagging system, it uses AI to determine the key semantic and structural characteristics of each mem and how they relate to each other — meaning that when you add content to Mem, you don’t need to anticipate how you might want to use that material in the future and tag it accordingly. Not having to decide now how a piece of writing might be relevant later can lead to some serendipitous connections between ideas, as I’ll explore further below.

Mem X thinks associatively and conceptually

Another key feature of Mem is the Smart Search function. While traditional keyword searches will only return results containing the exact word searched for, Smart Search uses AI to retrieve mems that may not actually contain the search keywords, but which it determines relevant anyway. For instance, if I search for “Miss Marple”, Agatha Christie’s fictional detective, in my Mem database, one result that pops up is an article on detective fiction I’ve written and stored in Mem that doesn’t mention Marple at all, or her creator, but does mention Poirot, another of Christie’s creations. This is useful for knowledge retrieval because it models the way we actually (mis)remember things — that is, vaguely, or associatively. So, say I’d got mixed up and actually I was really thinking of that piece I’d written on Poirot, but I misremembered it as about Marple – the generosity of Mem’s AI allows me to find it anyway.

But Smart Search can also identify categories or forms of writing conceptually, allowing you to search according to formal criteria as well as according to content. If I type, say, “aphorism” in the search bar, Smart Search returns aphorisms I’ve added to Mem, even if there’s nothing paratextual in the mem indicating that it is one (quotation marks, or an attribution if it’s not my own writing). Or, if I search for “paradox”, I get a small piece of writing I’ve been working on for a short story that includes the lines: “Did the relationship they shared exist outside of the office? Or was the office its container?” With barely any context, Mem can understand that there is something irresolvable in this statement that aligns it with the concept of ‘paradox’.

And what’s cool about this feature is that it also allows creativity to creep in at the sides when you want it. Sure, sometimes all you might want from a search is that it returns a specific piece of information you have in mind. Smart Search can do that. But certainly when I’m searching through my notes while I’m writing, I’m often looking for inspiration, too. Smart Search’s ability to work associatively and conceptually, and not just literally, means that I can use search in Mem for more than just information retrieval – I can use it to navigate the material I have already stored in a way that leaves my brain open to various suggestive, creative possibilities.
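Mem hasn’t published how Smart Search works internally, but associative search of this kind is typically built on text embeddings: each note and each query is mapped to a vector, and results are ranked by vector similarity rather than keyword overlap. Here’s a minimal, hypothetical sketch in Python, with hand-made toy vectors standing in for the embeddings a language model would produce (the note titles and numbers are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d embeddings. In a real system these would come from a language
# model, and would place semantically related texts near each other.
notes = {
    "Essay on Poirot and detective fiction": [0.9, 0.1, 0.0],
    "Grocery list":                          [0.0, 0.1, 0.9],
    "Draft chapter on train timetables":     [0.3, 0.6, 0.2],
}

# Embedding of the query "Miss Marple" -- close to the Poirot essay,
# even though that note never contains the literal string.
query_vec = [0.8, 0.2, 0.1]

ranked = sorted(notes, key=lambda t: cosine(query_vec, notes[t]), reverse=True)
print(ranked[0])  # -> Essay on Poirot and detective fiction
```

The point of the toy numbers is that the Poirot essay sits close to the “Miss Marple” query in embedding space, so it ranks first even though the keyword never appears in it — which is exactly the behavior a pure keyword search cannot reproduce.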

Mem doesn’t just store knowledge, it generates it

When you select a mem, a panel on the right-hand side of the screen populates with Similar mems — mems related to the one you’re currently looking at. Because Mem pulls those results based on a variety of criteria, and because, as I said above, it thinks associatively, those mems will be related to the original in various different ways, and clicking on them can take you in a wide range of ideational directions.

The Similar mems function externalizes and amplifies the way we actually think and retrieve information, which is not hierarchically (as in a folder system), but rhizomatically (as in a network of interconnecting correspondences). An example to prove my point: say I have a memory of a vacation I took with my partner Andrew in the fall of 2018. If my brain operated like a folder system, perhaps that memory would be filed under “Travel”, then “2018”, then “New Mexico”. Or maybe under “People”, then “Andrew”. Or under some other single hierarchy.

But we all know that’s not how you really store information — because all those categories blur into each other and overlap. If we actually thought in folders, I wouldn’t be able to leap from “Andrew” to “New Mexico” without a series of intermediary mental steps, whereas I can, of course, move from one to another and back again easily in my head.  

We’ve been taught for so long to organize material into folders or categories for retrieval that this idea might feel radical, when it’s actually intuitive. And by mimicking the way we actually think, Mem allows us to access just how creative we really are. I mentioned above that by eschewing the tagging system typical of other knowledge management systems, Mem X respects the messiness of the writing process — the fact that you don’t always know where you’re going with a piece of writing when you start it. Similar mems builds on that by effectively saying: “You might not know where you’re going with this yet, but I see it as related to these other things you’ve already told me about – what do you think?” Think of it as Wikipedia link-surfing, but for your own brain. Similar mems can help you make unexpected connections between ideas, and see things you haven’t explicitly thought of before. This isn’t just a question of knowledge management, but of knowledge generation.

Here’s another example of what I mean. I was recently writing a blog post about the LaMDA transcripts published by recently-dismissed Google AI researcher Blake Lemoine. One of the things that struck me was how Lemoine asked LaMDA to interpret a Zen kōan as a test of its sentience. I thought this was a curious choice – “interpreting” a kōan is not really the point of reading one – and it reminded me, too, of Douglas Hofstadter’s frequent illuminating references to Zen in his seminal book, Gödel, Escher, Bach. As I was writing this post in Mem, a Similar mem populated that caught my attention: “Self-disclosure is not intimacy”. It was an untagged snippet of text from several months ago, which I’d of course originally written in reference to human relationships. But now it had me thinking about the conversation between Lemoine and LaMDA. The distinction between knowing information about someone and feeling that you actually know them – that you are intimately connected to them – seemed suggestive in some way for the piece I was working on. Though I do not believe LaMDA is sentient, the transcripts nonetheless spoke to an incredibly human frustration that we have all faced: how can I know if the person I am speaking to is being sincere when they say they feel a particular way? And equally, how could I prove my sincerity to someone who doubted me? This suggestion from Mem was not only inspiration for another thread in my argument; it was also genuinely poignant, something to reflect upon long after I had finished writing.
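In embedding terms, a feature like Similar mems amounts to a nearest-neighbour lookup: take the vector of the note you have open and return the other notes whose vectors sit closest to it. Here’s a hypothetical Python sketch, with made-up vectors and note titles echoing this post (none of this is Mem’s actual implementation):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Hypothetical note embeddings (hand-made; a real system would get
# these from a language model).
notes = {
    "LaMDA blog post draft":           [0.7, 0.5, 0.1],
    "Self-disclosure is not intimacy": [0.6, 0.6, 0.2],
    "Chocolate cake recipe":           [0.1, 0.1, 0.9],
}

def similar(title, k=2):
    """Return the k notes nearest to `title`, excluding itself."""
    query = notes[title]
    others = [t for t in notes if t != title]
    return sorted(others, key=lambda t: cosine(query, notes[t]),
                  reverse=True)[:k]

print(similar("LaMDA blog post draft", k=1))
# -> ['Self-disclosure is not intimacy']
```

Because “closest” here is measured in meaning-space rather than by shared tags or keywords, an untagged snippet like “Self-disclosure is not intimacy” can surface next to a draft about LaMDA — the serendipity described above falls out of the geometry.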

*

John Seabrook, in an October 2019 article for the New Yorker, wonders this while experimenting with Google’s Smart Compose feature:

Had my computer become my co-writer? That’s one small step forward for artificial intelligence, but was it also one step backward for my own?

What we hope at Mem is that our use of NLP can mark a step forward for both. AI is getting bigger, better, faster and weirder by the day, but it’s still technology that can reflect, illuminate, and augment our own creativity, if we let it.  
