NFTs have revolutionized how auctions are done, bringing the action into the internet realm. But Fetch.ai takes it up a notch by putting up artwork created with the help of machine learning. Joining Jeff Kelley, Eathan Janney, and Josh Kriger is the company’s Senior Engineer, Emma Smith. She talks about (and presents a short demo of) using artificial intelligence to develop the most amazing digital art. Emma explains how this innovative technology works through random sources, different artistic patterns, and the user’s sheer creative skill. She also dives into the potential uses of machine learning for other purposes, from healthcare and transportation to smart contracts.
Listen to the podcast here:
Fetch.ai With Emma Smith
This episode features Emma Smith who leads the Collective Learning team at Fetch.ai, which is building an open-access, tokenized and decentralized machine learning network to enable smart infrastructure built around a decentralized digital economy. Collective learning is the framework upon which the NFT platform by Fetch.ai was built. Emma has an MSc in Physics from the University of Cambridge and is an experienced software engineer with a demonstrated history of working in the research and crypto industry. She is adept at coding using Python and C++, and is skilled in data science and machine learning. Emma, it’s great to have you here. That’s an impressive background. It sounds like it’s going to be a riveting conversation.
Thanks, Eathan. I’m excited to be here as well. We’ve had a great engagement with our CoLearn pAInt platform so far. I’m excited to be able to share it with you all.
We’re excited, Emma. Our show is all about the convergence of technology and culture. What you’re doing at Fetch with this project that you’re leading sounds amazing. It says a lot about you. They gave you this massive responsibility to break new ground.
I hope I explained it well. It’s exciting. That interface between people and technology is at the core of what CoLearn pAInt is about. The central part of it is that we’ve got a machine learning model that produces artwork. We’ve also got input from people who shape the artwork in the way they want it to go. The end result is these artworks, minted as NFTs, that are made by both AI and human beings.
To take a step back, Fetch is a leader in the industry. Maybe not everyone knows the origin story. How did this idea come together for Fetch? How did you get involved?
Fetch does a lot of things around the decentralized economy. One of them is collective learning. There’s a lot to this machine learning stuff, and sometimes it gets called AI. It’s doing incredibly cool things. You can get models that detect cancer from slides, drive cars, or detect fraud. We used to think that all of these things were something only a human could do, like detecting faces in a picture. Now, we’ve got machine learning models that can do all of these. That’s exciting.
There are a lot of drawbacks still, things that aren’t great. These models tend to be proprietary; they’re owned by one big company. As for the data you need to train them, either you need to be like Facebook or Google, with a huge lake of data to train this stuff on, or the data you need can’t be shared because it’s got to be kept private. Ordinary people can’t benefit from these models, even though the models are often trained on their data. That’s where the idea of collective learning came in.
Using tools from the decentralized economy like smart contracts, we can enable people to benefit from their data, collaborate, train these algorithms, and then get the benefits from them at the end. That’s a little bit abstract. The idea behind CoLearn pAInt is, let’s do this in a way that everybody can see and understand. People can see artworks. They understand what they like and what they don’t like. On the platform, there’s a model at the center that’s producing these artworks, and then there are people controlling it, shaping it, and getting a benefit at the end because they get the proceeds from the NFTs.
How did you get involved? Were you drafted from the outside or were you already working at Fetch? Did you suggest the project or did you join the team?
My background is in machine learning. I worked in a research group for a while. I’m always interested in what’s the next cool thing you can do with the machine learning model. I joined Fetch because I thought that the combination of machine learning and a decentralized economy had a huge number of things you could do with it.
This project was inspired by lots of these algorithmic art ones. Something that’s cool in NFTs is art that’s made by an algorithm, in ways that haven’t been seen before, like Art Blocks. Somebody must have seen CryptoPunks and gone, “Can we build one of those?” There’s a huge amount of creativity in this whole space. We thought, “Let’s combine that with the collective learning that we’re already building.”
It makes sense. Everything is building on top of everything, and getting mashed up in interesting ways. What Art Blocks is doing is cool. What you guys are doing establishes your own point of view on what the future can bring in terms of converging technology.
I hope so.
It’s fun to hear about these CoLearn pAInt projects. I’ve got a creative background. I play the piano, I like to draw and things like that. Our show art is something that I created. I’m curious how this came together. Was it just one day this idea fell in your lap to do a generative art type project or a collaborative art type project? How did this originate? How did it develop?
There have been some interesting papers in the field about art generation. There was a big breakthrough in the architecture that someone designed for a certain model. It produces art that is way cooler looking than all the previous art. I was inspired by that. That was a key thing. It’s interesting that you mentioned music because I still haven’t seen the same thing for music. It’s very hard to get a machine to write music. You listen to it and you think, “This makes a little bit of sense.” You then listen to the thing as a whole and it has no structure to it. It never comes back to the original chords. It wanders off in some weird direction.
It’s one of those things that Elon Musk mentions a lot when he’s talking about Neuralink in his approach to handling AI. It’s the creativity part of it and trying to reflect what humans do from a creative perspective. It’s been difficult. It seems like on the roadmap for AI, that’s a pretty big challenge versus all the left brain-type stuff. Do you agree on that point? It sounds like it is a big challenge as you mentioned already. Do you think that’s something that can be overcome with AI machine learning?
The field keeps coming on in leaps and bounds. Each time I see a new model, I’m like, “I can make better art,” and that type of thing. It doesn’t fail in the same way a human would, which often makes it a bit difficult to understand. If you told a human to draw, maybe they would make faces that looked a bit weird and lumpy, whilst your machine learning model makes faces most of the time, but occasionally makes some completely random mess and you’ve got no idea why. That’s a bit of a roadblock. In something like generating art, it’s fine because you can take the cool-looking ones. In something like self-driving cars, it isn’t. I’m sure 2021 was meant to have self-driving cars in it. Getting the models so that they don’t fail in weird ways, and understanding why they fail, is still a big problem in the field.
It’s a great concept, the collaboration with human intelligence and artificial intelligence to create art and music. I’m pretty impressed with my simple GarageBand software on my Macintosh. It’s got these “drummers.” You can give them different tempos to play. You can decide which drums they’re going to use in a specific segment, style or whatever. You can speed up or slow down the tempo, change the mood, and change the intensity. It does a pretty good job. It’s your own private robot drummer. There’s a lot of space here for the stuff that you’re working on in music and art. It’s like, “Let’s hold hands with robots and make art.”
It’s a cool idea. I’d love to see machine learning-generated music as an NFT. It would be unique.
With CoLearn pAInt, we’d love to learn more for our readers about how it works. What does the user experience look like at the intersection of the AI and machine learning aspects of the platform?
The way the platform works is that everybody who’s taking part seeds this generator with some uniquely generated randomness. These generators need a source of randomness to generate the art. If you imagine, it’s just an algorithm. If you didn’t have a random source, it would produce the same thing each time. There needs to be some random input for it, which people generate by drawing a little picture. The art doesn’t come out looking like the pictures. They’re used to seed the generator. That’s the first step.
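The seeding step can be sketched in a few lines. This is a toy illustration, not Fetch.ai’s actual code: the function name `seed_from_drawing` and the `(x, y)` stroke format are assumptions, but the idea is the same, hash the pen strokes into a number and use it to seed a random generator.

```python
import hashlib
import random

def seed_from_drawing(strokes):
    """Derive a reproducible seed from a user's sketch.

    `strokes` is a list of (x, y) pen coordinates captured while drawing.
    The drawing itself never appears in the output art; it only serves as
    an entropy source, so two different sketches give two different
    starting points for the generator.
    """
    raw = b"".join(f"{x},{y};".encode() for x, y in strokes)
    digest = hashlib.sha256(raw).digest()
    return int.from_bytes(digest[:8], "big")  # 64-bit seed

# Two different doodles yield different seeds, hence different art.
seed_a = seed_from_drawing([(1, 2), (3, 4), (5, 6)])
seed_b = seed_from_drawing([(1, 2), (3, 4), (5, 7)])
rng = random.Random(seed_a)  # this RNG would drive the art generator
```

The hash makes the seed deterministic (the same doodle always gives the same art) while spreading even tiny differences between sketches across the whole seed space.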
The machine learning model tries its best to make cool-looking artworks. We’ve trained the model on a data set of open-license images that we collected from the internet. What people see is what the model thinks is art at that stage in its training. Sometimes it’s quite good at picking out patterns and things. It often picks out eyes; an eye is a nice repeating pattern, so it’ll stick eyes randomly on things. It has a bit of a sense of interesting colors and patterns and things like that. That’s the first stage of the voting.
Everybody votes. Everybody who’s staked their tokens to take part in the competition gets to vote for the winners. We take those winners through to round two. They get fed into the training of this algorithm, so we train a bit more. We say that these are the cool ones, so produce more like this. You can see as a user, from stage 1 to stage 2, that it’s using some of the patterns or colors that were selected in stage one and trying to use them more, because it’s learned that that’s what’s good and what people like to see. There’s another round, and then the winning images from the third round get minted as NFTs.
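The three-round loop described above can be sketched abstractly. Everything here is a toy stand-in (the platform’s real generator is a neural network, and votes come from stakers, not a sort), but it shows the shape of the process: generate, vote, retrain, repeat, then mint the last round’s winners.

```python
import random

def run_competition(generate, vote, train, rounds=3, batch=8):
    """Toy sketch of the three-round voting loop.

    generate(model, n)    -> list of candidate artworks
    vote(candidates)      -> the winners chosen by stakers
    train(model, winners) -> model nudged toward the winners
    """
    model = {"palette_bias": 0.0}  # stand-in for generator weights
    winners = []
    for _ in range(rounds):
        candidates = generate(model, batch) + winners  # winners re-enter
        winners = vote(candidates)
        model = train(model, winners)  # "produce more like this"
    return winners  # final round's winners get minted as NFTs

# Toy implementations: artworks are just numbers, voting keeps the top 3,
# and training biases the generator toward the winners' average.
rng = random.Random(0)
gen = lambda m, n: [m["palette_bias"] + rng.random() for _ in range(n)]
vote = lambda cs: sorted(cs, reverse=True)[:3]
train = lambda m, ws: {"palette_bias": sum(ws) / len(ws)}
final = run_competition(gen, vote, train)
```

The point of the structure is that the voters sit inside the training loop: each round’s winners become part of the next round’s training signal.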
I love that rhythm of the process and finding ways to collaborate between an AI and human feedback. We don’t see that frequently. Maybe it happens a lot behind the scenes. In our circle, we don’t hear about that much or see it in action. It sounds like it’s going to elevate awareness of this concept. A lot of people might take that and be inspired by it to develop other projects or other concepts around similar ideas. It’s cool.
It reminds me of improv, something that the three of us have done. You’re in this troupe and sometimes there’s a call out from the audience of something random that maybe has nothing to do with anything that you had in mind but you roll with it. You get feedback based on what the audience likes and what they think is funny, and you go in that direction. As the troupe does more and more projects together, they get more comfortable with each other. Anything that they get thrown at them, they can make it awesome.
You’ve never had anything that stumped you?
That’s why improv is entertaining. Part of the entertainment of improv is the awkwardness that you get to watch performers having to figure out what to do. The more difficult, the better.
Let’s take a step back here and talk about the bigger picture of collective learning and its range of use cases. Where can this all go? What are some of the potential areas where this could be applied? How do we optimize this type of thing? What’s on your mind and keeping you up at night in terms of what’s next?
It’s useful in a huge number of areas, wherever machine learning models can be used. One of the key ones we have looked at is healthcare. In healthcare, there is a lot of health data that has to be kept private. Nobody wants to share their private health data. You don’t want to upload that to some central server. There’s still a huge use that could be made of that data: diagnosing things from scans, classifying slides, and those things. This is something we’ve been focusing on. There are a bunch of techniques that you can use. Everybody takes the model, trains it a bit on their own data, and then passes it on. That way, you don’t have to share the private training data. You just get the benefit of this model in the center. That’s a use case that we’re excited about.
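A minimal sketch of that pass-it-on idea, with a deliberately tiny one-parameter model (fitting a mean) standing in for a real diagnostic network: each participant trains on data that never leaves them, and only the model travels between them.

```python
def local_update(weights, private_data, lr=0.1):
    """One participant nudges the shared model toward its own data.

    Only `weights` travel between participants; `private_data` never
    leaves the owner. The update rule is a gradient step for squared
    error, pulling the single weight toward each local data point.
    """
    for x in private_data:
        weights += lr * (x - weights)
    return weights

# The model is passed around; each holder trains on data the others
# never see, yet the final model reflects everyone's data.
hospital_data = [[1.0, 1.2, 0.9], [1.1, 1.3], [0.8, 1.0, 1.1]]
w = 0.0
for data in hospital_data:
    w = local_update(w, data)  # in reality, sent on to the next holder
```

A real collective learning system adds coordination and verification on top (that’s where the blockchain side comes in), but the privacy property is this one: the data stays put and the model moves.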
We’ve been in talks with some transport ones. Cars now are like supercomputers on wheels. There’s an awful lot of data there, but there are privacy concerns. Also, people say, “Why would I bother sharing my data? Why would I bother training this machine learning model on my car, where it’s consuming my power?” The thing is, by using smart contracts, you can incentivize people to share their data to do these tasks. You can say, “Your update to this model is very useful. This model predicts when electric cars need to go to charging points. As a reward for that, you can get some cryptocurrency.” Because it’s all a lot more seamless, we can incentivize things that otherwise wouldn’t have been economical to do.
It reminds me of my former life in the consulting world. I worked on a predictive analytics project trying to look at trends around homelessness for veterans. There were 60 different systems around the country and different data collection teams. It sounds like something like this could have helped them predict patterns around preventing homelessness among veterans.
That sounds like an important use case. Certainly, machine learning models can often be complicated. It’s not as simple as one thing implying another; it’s a huge number of different factors. Machine learning models can find patterns that a human wouldn’t be able to find. With collective learning, you don’t need one person who has all the data. People who each hold part of the data can come together and train the model together.
From the optimization side, is it repetition and practice or is there more to it?
In terms of training the model, do you mean how it gets better at its task?
Yes. Is it a matter of practice makes perfect?
Essentially, they learn from their mistakes. The errors that they make get sent backward through the network. If you’ve got this network, the inputs come in at one end and get transformed through the layers of the network until you get outputs that classify something, like a scan as having this disease or not having this disease. When it makes a mistake, you can propagate the error backward through the network, work out which of the weights were wrong, and change the ones that ought to be changed. You do loads of steps like that. Gradually, your training progresses and the model gets better at capturing those patterns.
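That error-backward process can be shown concretely with a single sigmoid neuron. This is textbook backpropagation boiled down to one weight and one bias, not Fetch.ai’s code:

```python
import math

def train_step(w, b, x, target, lr=0.5):
    """One forward/backward pass for a single sigmoid neuron.

    Forward: input -> weighted sum -> sigmoid -> prediction.
    Backward: the error is propagated back through the chain rule to
    work out how much each weight contributed, and the weights are
    nudged in the direction that shrinks the error.
    """
    z = w * x + b
    pred = 1 / (1 + math.exp(-z))   # forward pass
    error = pred - target           # how wrong were we?
    dpred_dz = pred * (1 - pred)    # sigmoid derivative
    w -= lr * error * dpred_dz * x  # chain rule: blame the weight
    b -= lr * error * dpred_dz
    return w, b, abs(error)

# "Practice makes perfect": repeating the step shrinks the error.
w, b = 0.0, 0.0
errors = []
for _ in range(200):
    w, b, e = train_step(w, b, x=1.0, target=1.0)
    errors.append(e)
```

Real networks do exactly this across millions of weights and many layers, with the chain rule carrying the error back layer by layer.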
Machine Learning 101.
It’s a tricky topic to explain.
It sounds a little bit like The Lean Startup methodology.
Make something terrible and then iterate through the mistakes.
This is fascinating stuff. We’re excited about this unboxing coming up. I got one more question for you before that. We know you’re super sharp and you’re keeping track of the coolest stuff in the space with tech and art. What NFT projects and platforms, either around now or maybe that you foresee, stand out as potential game-changers here? We’ll call it long-term. I know 5 or 10 years is hard to project. What do you think?
It’s tricky, especially when the space is moving quickly. Lots of the generative art stuff looks beautiful. The algorithms you use to make it are interesting. Those Fidenzas from Art Blocks look nice. The pattern that they use to make them is amazing. There are beautiful things that can come out. Some of the ones that have already been around for a couple of years will still be around.
I don’t think CryptoPunks is going to go to zero anytime soon. A lot of them are fun where your NFT gives you access to something. I spoke to Josh about ZED RUN Horse Racing. I had to look at that after our conversation and I was like, “This is fun.” You buy your horse. You get to train your horse. We’ve seen a huge amount of personal engagement with people and things like Bored Ape Yacht Club where it’s not just a collectible thing, it also gives you access to something. It’s something you get to interact with.
Speaking of beautiful, there’s now the Mutant Apes that have come out. They are disfigured in a beautiful way. For those folks that don’t know, they also dropped M1, M2 and M3 mutant viruses. You can turn your Bored Ape into a mutant. The M1, the rarest ones, are selling for $1 million. Eight lucky people got airdropped $1 million. The M2s are going for $75,000. The M3s are going for $25,000. Either way, it’s not a bad airdrop, to say the least. I apologize if any Ape holders out there read this and I’ve transposed the information.
To your point, generative art is amazing and there’s unlimited possibility. There are the dances now. It’s going to keep going and going. It’s about community and having fun. There’s so much to NFTs. The real game-changing stuff is functional and practical, maybe something that none of us will even recognize as NFTs in the future. The community and the fun part of it are driving so much of it forward. It’s great to be part of that. What we’re excited about is to get a look at a demo of your project, show some of our readers what it’s all about, and learn a little bit more.
I’m looking forward to showing you all. I’m going to show you all the demo. I’ve got it running locally on my computer. We are running our live platform. We’ve had a great amount of engagement with it. People have staked 1.6 million FET tokens. That’s about $800,000.
That’s an impressive base of supporters already.
There is a bit of an application process to be part of the team.
There’s this idea of having a machine learning model that’s something valuable and that you get to control. To take part in this, you stake some of your FET tokens. That’s the token for Fetch.ai, which is the company that Collective Learning is part of. You stake these tokens to take part. We organize that through a Dutch auction, run through a smart contract.
People can be certain that they’ll get their FET back because they can verify the contract themselves. It’s a Dutch auction, so it’s backward: you start with a high price and gradually move it lower. People are going, “Will I pay that much for a slot? Maybe I’ll wait until it gets lower. Maybe it will sell out.” I made my bid with some FET and got some slots. This is the first stage. This is how we select people. There’s a fixed number of slots available; in our event, we’ve got 200. That determines the proceeds people get from the final sales. At the end of this process, we’ll take our top three best-looking artworks and mint them as NFTs. We auction them on OpenSea. All the people who staked their FET get their tokens back, and they also get their cut of the proceeds. It’s getting people used to this idea that you take part in a machine learning model, you invest in it, and then you get rewards out of that process.
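A descending-price slot sale like the one described can be sketched as follows. This is a plain-Python toy, not the actual smart contract; the bidder names and price schedule are made up:

```python
def dutch_auction(start_price, floor_price, decrement, bidders, slots):
    """Toy sketch of a descending-price (Dutch) slot sale.

    The price starts high and steps down; each bidder buys the moment
    the price falls to what they are willing to pay, until the slots
    run out. `bidders` maps name -> maximum price they will pay (FET).
    """
    price = start_price
    sold = []  # (bidder, price paid)
    while price >= floor_price and len(sold) < slots:
        for name, max_bid in bidders.items():
            already_bought = name in [n for n, _ in sold]
            if max_bid >= price and not already_bought:
                sold.append((name, price))
                if len(sold) == slots:
                    break
        price -= decrement
    return sold

# Alice buys as soon as the price drops to 90; Carol at 70; Bob's 50
# never gets reached because the slots sell out first.
sales = dutch_auction(
    start_price=100, floor_price=10, decrement=10,
    bidders={"alice": 90, "bob": 50, "carol": 70}, slots=2,
)
```

The “maybe I’ll wait, maybe it will sell out” tension Emma describes is exactly the trade-off the falling price creates: waiting gets you a cheaper slot only if the supply lasts.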
I wonder how many participants are going to be excited that they’re going to buy the art.
That would be cool. Also, when you interact with something, you start to feel a sense of ownership of it, which is good for this thing. They’ll think, “I made that. I want to own it.” I’m going to switch around to the build phase. This is the first one, where people are submitting their entropy to seed the generator. It needs a source of randomness. We’re generating this from all the users by getting them to draw a picture. They draw a picture and we use it as the entropy that seeds the generator. It’s a fun way of coming up with some unique random input. It’s been amazing what creativity people have had, the things you can draw in a little box. There’s been a lot of rockets, moons and attempts at drawing dogs and things.
You have to draw it. You can’t upload something.
In future iterations of this, we’ve been looking at algorithms where people will upload their images and that’s the stock of art we’ll work from. We’ll make something that looks like this.
For our readers at home, what Emma is drawing could easily be one of Gary Vee’s NFTs.
Here’s the rocket and there’s the moon.
As she’s drawing, there’s a little entropy generator measuring bar. It seems like it’s asking you for a certain amount of content. You’re at 38% after you draw the moon. With each stroke of the pen, it fills up this entropy generator ostensibly until you’re at 100% and then you’ve got what you need. It’s encouraging you to be a thorough artist to add some detail.
You can’t scribble briefly. You’ve got to put some thought into it. That’s the entropy generation stage. Now let’s move on to looking at some actual artworks from this. There are three rounds of voting as people shape the artwork. I’ll access it here so we’re looking at phase two, where we’re refining the artwork. The first ones here were the initial things that the model came out with. They’re cool. They’re weirdly a bit psychedelic and dream-like.
Some of them look like alternate universes or something.
It picks up eyes quite well so it often sticks eyes on things. We’ve got this terrifying sheep or something.
It’s almost like a bear.
It’s very animaloid but it’s also abstract.
Bright colors, shapes, and things.
I’m looking at the groovy one. Are there eyes in the groovy one too? It does seem like it likes to put eyes.
I could imagine that’s an eye.
It might have been some teeth. That one looks like it is inspired by a Bored Ape in some way.
I can see it. It’s got an idea from its training data set that you often have portraits where there’s a person in the middle surrounded by dark backgrounds. That’s what it’s going for with creepy eyes across.
When in doubt, add eyes.
Let’s do something abstract with as many colors as possible. We got this cool pattern here. This one feels a bit like those Rorschach tests where you’re seeing shapes in them. This is some kind of landscape. It’s a futuristic landscape.
They seem to be complex in terms of the intricacy of the lines, colors, depth and things like that.
Is there a marine element to it almost?
I’m excited. This can turn me into an artist.
It makes it easier for people to take part. You’ve got your machine helper and you’ve got the artificial intelligence that helps you out.
I have a chance at competing with Eathan in the art category with this type of technology to help.
Sometimes it creates complex ones. Other times, it goes for a simple pattern so we’ve got lots more here. Sometimes it’s like, “Let’s do something nice, simple and monochrome or let’s put eyes everywhere.” It’s amazing what range it can come up with.
This one below here looks like a human face. You said you see these with royalty-free images and things like this. Do you sense that this is close to the face of someone in a royalty-free image or it’s inspired by faces to come up with a humanoid form?
There are measures of how much it’s repeating what you train it on whether it’s coming up with actual new things. It’s performing fairly well. It’s probably a face that it’s come up with and not one from its training data alone. I can’t be sure. I haven’t gone through and memorized all of them, but it’s got the idea of like faces. They have two eyes, a nose and a mouth.
The bottom line is it’s impressive. It’s cool and interesting stuff. If you think about it through the lens of NFTs as they exist now, a lot of people will be interested in owning or trading when you look at it through that lens alone. Also, get a lot of value out of it as part of the creative process. What’s interesting about phase two is we’ve talked a lot about how a lot of these popular projects, at least at this particular zeitgeist, these collectible items that are humanoid and have the potential to be a character that you could put into a story or tell stories about. Also, have your own personal narrative around.
Seeing this humanoid one here makes me think you could make a whole collection of these. Looking at the picture itself invokes a story. You could make the story up about that character. It’s a little bit of a deformed face. It looks like he’s in some kind of previous-century clothing. You could make an amazing story up about that. If you could have several of these AI-generated characters with stories around them, it could be a beautiful collection.
That would be fascinating. That would be cool to see what you can create out of that. It’s always amazing what people can read into these things. That’s the kind of interesting interaction between humans and machines.
These are random seeds for our brand machines.
These are sample images generated before, so these are some that came out of round number one. These are the ones we used in our events. People selected these top five as their favorites. We’ve got Creepy Sheep. We’ve got this Starburst Landscape.
How did the titles underneath those become made?
The titles are randomly added by us. Art needs a caption. We did look into automatic captioning, which would be an interesting thing to add in phase two of this. You can now get machine learning models that will take an image and try to say, “That’s two dogs on some grass,” or “That’s a landscape with the sea in it,” or something like that. For the moment, these are words that we’ve chosen humorously to caption the art. It gives us a handle for it.
This is phase two. We add in the winner from the previous round but you can also see the art that’s generated is inspired by some of the colors and shapes from the ones that it had as the winner. These five were the winners from the previous round. You can see this one coming up with some of the same shapes. That’s a bit of a repeating pattern there. This one has been greatly inspired by this one.
That is fascinating because The Seed was a very abstracted face. It’s like somebody threw eyes on something but then the next phase you would think would be more abstracted, but it’s less abstracted where you have those eyes turned into a face behind them, and then another face behind the other ones. That’s pretty cool.
It’s taken these things and built on them. There are more ones from that where it properly turned these into faces. Let me bring that out a bit more. These ones are inspired by the same pattern. Here, it carried on down that path like, “I put some eyes in this, and I’ll put some more eyes. I’ll add a nose,” following down that generative process.
You might think, not knowing how the system works from the outside that it would be a relatively simple approach of blending this image and that image. Then you get something maybe less than what the original source was. It’s so clear that it is an evolutionary process and it is refinement, and an improvement on the concept. There’s something special in there happening, and that’s the AI and machine learning element of this whole thing.
I’m digging Watery. That one is cool. It’s these ghost-like humanoids levitating over some marshy water swamp. That’s beautiful.
You can see that it has been inspired by one of the winners from the previous round. It’s getting patterns from this, riffing off that, and combining it with other colors, shapes and things. It’s amazing what it comes up with. There are all sorts of colors and shapes in here, and then sometimes it comes up with another version of that. Maybe it looks good, but let’s put it on some hot background like they’re walking on lava. That’s how the steps go with the art, so it gets refined and gets more towards what people selected in the previous round. I’m going to switch over to our final round, where I’ve got some fresh generators.
I realized that these AI algorithms are on some psychedelics injected into the mainframe.
It’s definitely taking something with all these cool colors.
We get our own versions of this co-creation canvas.
In the final stage, I’m going to open up the third round, where I’ve got some freshly generated images and you can all pick your favorites. We will choose three, and we’ll choose a fourth one that we’ll give away to a lucky reader who favorites our OpenSea auctions. The top three get sold. If you favorite our auctions, you could be in with a chance to win the fourth NFT that we’re going to generate on the next page.
That’s amazing. We’ll shoot all the details out on socials for folks to participate.
I’m trying to decide whether we should choose, or whether you should tell us which ones you associate with us, and we can fight over them.
I’m at the final stage on this and we’ll see some that we can select. Choose some favorites. We’ve got a lot of colorful ones here. I’ll have a scroll through and we can go back and pick some favorite ones.
KC has some of our colors in it. Keep going.
It’s like a ‘90s style.
We’ve got the randomized everywhere. We’ve got this minimalist one. It’s like the opposite of flashy.
It looks like cactuses or something. I like that one on the lower right coming up here with the pinks, the oranges and greens there. It’s solid.
Eathan does love the color pink though.
It’s a strong color. It picked some nice colors.
Your science background is coming out as well. It looks like something under a microscope.
It does, doesn’t it?
It’s like a water bear’s house or something. Do we each pick one? Is that how it goes?
I pick that one. That’s mine.
That one is selected. It’s like a weird frog creature.
We’re going to get that.
It looks like one of my background characters, doesn’t it?
It does. That’s like the Jabba the Hutt knockoff.
I’m in. I’m going to go with my gut.
We might gift that to the Frogman guys.
We need to top that from the Frogman Discord channels for sure.
I got gruesome but in a very cute way.
I didn’t even look at my title. Fanatical? Is that what it’s called?
Jeff is up.
That character is awesome. Scroll back up there. What do we have?
We’ve got stars.
I can’t believe we found a Jabba the Hutt knockoff for you. I got to go with Flashy. That’s my vibe. It’s stripped-down and flashy.
Flash master Jeff.
That one is great. It’s tasteful. It’s minimalist, and it’s amazing to see what people come up with to see in these kinds of pictures.
You’re going for this black and white cactusy type thing, Jeff?
I see crows in there. It’s cool to me.
We got to pick one now for the readers.
I’ll pick one for the readers. Creepy Eyes?
When you say Creepy Eyes, you’re doing it a disservice. I see a puppy or a dog there like a big, furry, and friendly dog that’s looking at you. He’s got some extra eyes, that’s true.
I thought that was a nose and that’s the mouth. It’s looking at you with its doggy eyes.
He’s a sweet boy.
I see rainbows.
We’ll put him in the running. What else? Can you scroll up again?
That’s called Bloody, by the way.
Steadfast is also pretty dope.
I like Fast.
Fast is cool too. Edge of NFT and Fast.
It’s representational. I see a little house there.
It’s a house surrounded by all this psychedelic stuff.
I like the Entertaining one. It’s understated but also sophisticated.
It’s like the beak of a bird.
I see a present wrapped with a bow on top. There are so many to choose from.
What was the Fantastical one next to Bloody, the dog? It’s a bloodhound.
Bloody the Bloodhound. That’s cool. It’s like a shape or something.
I began thinking of Edge of NFT colors.
Let’s look at colorful.
We can spend all day here by the way. This is so fun and cool.
I don’t know what it’s thinking here but it’s picked great colors for it.
I want to have a cup of tea, take a bath now, and look at art.
It’s got the idea with landscapes that you have some mountains in the background, but its ground is pink. Maybe the ground is pink. We’ve got the auctioneers for Bloody Bloodhound.
We’re going to do the Bloody and Creepy Eyes. Do we all agree on that? I would just be going along.
We’ve got this crypto eight-ball or something.
That’s a character. This is called the Paradox of Choice. There are many things that you can pick.
I love those colors. They’re bright and vibrant.
We selected Tasty earlier on. That one has potential.
I like the colors. It matches our colors.
Should we make our fourth pick then to be Tasty? We’ll have one for each of you three and Tasty is the one for the readers? Are there some late votes for Bloody the Bloodhound with his sad doggy eyes?
I like characters. I’m a character lover. I feel like there’s a character in the story in there that I have an affinity for.
It’s about a boy and his dog.
Josh, you’ve got to be the tiebreaker on this between Tasty and Bloody. It’s up to you.
Do you want Tasty or Bloody, Jeff?
We’ve talked a lot about eyes on the show, so it goes to Bloody. Maybe we can ask our followers what they see there and make that a fun thing.
Come up with a story about Bloody the Bloodhound.
That was not easy.
Thank you. That was a fun process.
This is not Infinite Scroll but it’s getting close in terms of the number of artworks.
We finally chose. The final stage is that we take the top three winning images and mint them as NFTs, and then everybody gets their rewards. We’ve got a little winner’s showcase to show people what the final ones are. They then go off to OpenSea to be auctioned, and people get the rewards for that.
What’s the date that they’re going to go on OpenSea?
Monday, the 6th is the opening of the auction. That’ll run for two weeks. There’s been a lot of interest already in this whole process. It’s going to be very interesting.
There will still be a chance to go to OpenSea and check out the auction. Maybe get one of these amazing pieces of historical art recognizing a brand new revolutionary project in the space. This is pretty awesome. I have to say I was looking forward to seeing the demo and understanding what you all have done here. It’s magical. I hope that you have a sense of fulfillment from architecting this amazing project.
It’s been fun to show it to you and see what people see in the artworks. It’s been fun to build and it’s fun to show it to people.
We’re a little over, mainly because of our indecision over all the magical art. Do you still have time to hang out with us and do some Edge Quick Hitters?
Sure. Let’s go for it.
Edge Quick Hitters is a fun and quick way to get to know you a little better. There are ten questions and we’re looking for short single-word or fewer-word answers but feel free to expand if you get the urge. Are you ready to dive in?
Let’s go for it.
Question number one, what’s the first thing you remember ever purchasing in your life?
It was probably sweets or something. Whatever I first purchased, I immediately ate. I also remember a little friendship necklace. These were cool in 1995. You get a necklace that’s in two halves. There are two necklaces and you give one to your best friend. They form a little pair. I was thinking about that and I was like, “Somebody could make an NFT version of that.” You make them as a pair of tokens.
That’s an option. It also foretold your future co-creating NFT art right there.
I probably didn’t think of it at the time. Maybe to take off, it would need to have more ten-year-old girls buying NFTs.
That sounds cool. I like it. Question number two, what is the first thing you remember ever selling in your life?
I and my brother had a scheme when we were kids to get pirated CDs off the internet and then sell those. It was weird. At that time, I remember being like, “Files on the internet are not worth anything. Once you turn into a physical form like a CD, people pay for it.” It’s amazing how now that’s completely gone away. CDs are now almost old-fashioned. Now we’re used to the idea that NFTs are purely virtual but they’re still enormously valuable.
Josh, did we include the anti-piracy disclosure at the beginning of this? Let’s double-check that. Number three, what’s the most recent thing you purchased?
Apart from groceries, I purchased a slot to take part in the CoLearn pAInt event because it’s been exciting to build it and I want to take part.
That would be a bummer if you didn’t have that slot. Question number four, what’s the most recent thing you sold?
With the pandemic and the house clear out, I’ve sold some spare junk on Gumtree.
Your background is looking very feng shui.
You just can’t see the floor.
I feel like we’ve all accumulated too much during this time. Question number five, what is your most prized possession?
My most prized possession is my bike because I got it from my mom. It’s a nice 1980 steel road bike. Eventually, it’s going to get nicked and I’ll be sad, but it’s fun to ride.
Do you ride mostly for transit, for short trips and whatnot?
Yeah, pretty much. Since the pandemic, we’ve been fully remote. I don’t have to go into the office anymore. On short trips, I’m always on my bike.
It’s one of the most efficient forms of transportation.
Question number six, if you could buy anything in the world, digital, physical, service and experience that’s currently for sale, what would that be?
If money was no object, one of those trips into space or to the moon would be cool. When somebody says, “One moon,” I want to be able to reply, “Tuesday.” I’d take a trip to the moon.
That’s special. That would be something else and it’s not easy to come by either. Question number seven, if you could pass on one of your personality traits to the next generation, what would that be?
I like learning new things and that’s quite an important trait to have. It’s good to be able to learn about new stuff because there’s so much cool and new stuff coming out all the time. That’s the trait I’d most like to pass on.
We’re all the beneficiaries of that trait with this amazing project. On the flip side though, if you could eliminate one of your personality traits from the next generation, what would that be?
Lack of chill. We’ve been building this project. When you’re about to release it, someone goes, “Is it meant to do that?” You go, “No.” It’s all a bit of a last-minute panic sometimes getting things out. Often, I say to myself, “Be more chill. Relax. It’ll turn out fine.” It pretty much has done in the end.
Question number nine, what did you do before joining us on the show?
I prepared the demo. I was like, “Let’s make sure I get rid of those bugs.” Also, we launched the second voting phase of CoLearn pAInt. There was a lot of making sure everything on that was ready to go. That’s what I did.
I’m glad you were able to carve some time out to be with us here. Last one, question number ten, what are you going to do next after the show?
I’ll quite possibly crack open a beer.
Thanks for playing some Edge Quick Hitters with us. We appreciate it. Overall, thanks for taking the time to spend with us and share this amazing project with our readers. Everybody is going to get excited. I implore all of our readers to take a minute and head over to YouTube to check out the video of the demo. You’ll get the full experience of everything we went through if you see that video, even though Eathan did a great job explaining everything. We want to make sure you have an opportunity to share your social handles or websites where people can go to learn more about you and this amazing project. Where should people go?
That’s great. We do want folks to keep an eye out on our social channels for details around the amazing giveaway and contest that we’ll put together for this artwork that we selected. That’s going to be a lot of fun. Keep an eye out for that. We’ve reached the outer limit at the Edge of NFT. Thanks for exploring with us. We’ve got more space for adventures on the starship. Invite your friends and recruit some cool strangers that will make this journey all so much better. Go to iTunes, rate us and say something awesome.
Go to EdgeOfNFT.com to dive further down the rabbit hole. Do you want to help co-create Edge of NFT with us, got guests you want to see on an episode, questions for hosts or guests, or an NFT you’d like us to review? Drop us a line at Contact@EdgeOfNFT.com or tweet us at @EdgeofNFT to get in the mix. Lastly, be sure to tune in next episode for some more great NFT content. Thanks again for sharing this time with us.