If you’re new to kennethcurtis.com, I’d first like to welcome you to the new design and hope that, if you get a chance, you’ll take a moment to look around. One of the many goals of this site is to create an area for artists of all kinds. It doesn’t matter whether their chosen field is fine arts or applied arts, or even if they fall somewhere between the two; all are welcome.
As the webmaster and creator of this site, I have the honor and privilege of inviting Greg Senn, one of my closest friends and favorite artists, to be the subject of the first interview. As well as being a certified scuba diver for nearly twenty years, he has been an artist and art professor for about thirty years. Greg has based his work primarily on casting and metalwork, and some of the pieces he has created are truly remarkable. His work ranges from serious to playful, and he seems to revel in the freedom that art gives him.
Dmitry Ulyanov, an engineer at Samsung AI, tweeted something remarkable two days ago. He was one of the contributors to the paper, so I assume he was one of the inventors of this new form of digital manipulation. From his tweet:
“Another great paper from Samsung AI lab! @egorzakharovdl et al. animate heads using only few shots of target person (or even 1 shot). Keypoints, adaptive instance norms and GANs, no 3D face modelling at all. “
First, let me just acknowledge that I’m not sure whether this could be considered art in the traditional sense, but in the near future movies, television, and video will probably all have to be second-guessed. You would have to wonder whether the dignitary making a speech is really a person. Is he actually speaking, or is it an AI? It will be entirely possible that his face was digitized and the subsequent speech created for him. That’s possibly the best-case scenario.
The relative ease with which this can be done is what is so impressive. You don’t need a 3D model to create the talking head; you just need a photograph, and it seems the more photos you use, the better the result.
The video on YouTube shows six photos of an unnamed person being turned into a talking head. In some of the other examples they used selfies from Facebook to create a girl talking. The really interesting part, and probably what excites me the most, is that they can do it with just one image. Think of it: take a photo and turn it into a talking head. Some of the examples they used were Marilyn Monroe and the Mona Lisa.
Here is the abstract of the paper written about it:
“Several recent works have shown how highly realistic human head images can be obtained by training convolutional neural networks to generate them. In order to create a personalized talking head model, these works require training on a large dataset of images of a single person. However, in many practical scenarios, such personalized talking head models need to be learned from a few image views of a person, potentially even a single image. Here, we present a system with such few-shot capability. It performs lengthy meta-learning on a large dataset of videos, and after that is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high capacity generators and discriminators. Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters. We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.” –https://arxiv.org/abs/1905.08233v1
The one-photograph claim isn’t exactly true, because the process also involves comparing against and integrating other images and video. If I understand it correctly, they take a photograph, or more than one, and combine it with an AI model of a person moving. In layman’s terms, it could be described as mapping the face onto the movements. The difference is that it appears to all be done by the computer.
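To make that “mapping” idea a little more concrete, here is a minimal sketch, in PyTorch, of how such a few-shot pipeline might be wired together, based only on the abstract above: a handful of reference photos are condensed into an identity embedding, and a generator turns rasterized facial keypoints from a driving video into new frames of that identity. Every module name, layer, and shape here is my own simplification for illustration, not the authors’ actual architecture (which uses adaptive instance normalization and adversarial fine-tuning of much larger networks).

```python
# Hypothetical sketch of a few-shot "talking head" pipeline, inferred from the
# paper's abstract. Module names, layers, and shapes are illustrative only.
import torch
import torch.nn as nn

class Embedder(nn.Module):
    """Condenses a few reference photos of a person into one identity vector."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, photos):               # photos: (K, 3, H, W)
        return self.net(photos).mean(dim=0)  # average over the K shots

class Generator(nn.Module):
    """Turns a rasterized keypoint (landmark) image plus the identity vector
    into a synthesized frame of that person."""
    def __init__(self, dim=512):
        super().__init__()
        # In the real model the identity vector would drive adaptive instance
        # norm layers; here it is simply broadcast and concatenated.
        self.identity_proj = nn.Linear(dim, 64)
        self.net = nn.Sequential(
            nn.Conv2d(3 + 64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

    def forward(self, landmark_img, identity):  # landmark_img: (1, 3, H, W)
        style = self.identity_proj(identity).view(1, -1, 1, 1)
        style = style.expand(-1, -1, *landmark_img.shape[-2:])
        return self.net(torch.cat([landmark_img, style], dim=1))

# Meta-learned weights would be loaded here; a short adversarial fine-tune on
# the K reference photos then personalizes the generator (and a discriminator,
# omitted here) before the head is driven by landmarks from a video.
embedder, generator = Embedder(), Generator()
refs = torch.rand(3, 3, 128, 128)               # K = 3 reference photos
identity = embedder(refs)
driving_landmarks = torch.rand(1, 3, 128, 128)  # keypoints of one driving frame
fake_frame = generator(driving_landmarks, identity)
print(fake_frame.shape)                         # torch.Size([1, 3, 128, 128])
```

Again, this is only a guess at the shape of the system, but it shows why a single photo can be enough: the heavy lifting is done ahead of time by meta-learning on many videos, and the photo just personalizes an already-trained model.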
I’m still trying to digest the ramifications of this new technology. From what I’ve read, they are not sharing it, for obvious reasons: a person could easily be impersonated, although there are some limitations at the moment. The technology is rough. Right now the videos show distortion and could easily be detected as computer generated. That’s right now, though. I could have said the exact same thing about CGI in movies no more than ten years ago. How long will this technology stay rough and unrefined? Will we even know when they’ve worked out the bugs? These questions will only be answered in time, and probably in less time than we expect.
First, I want to say that I feel good about where the site is right now. I think I’ve got most of the elements in place, and I’ve started to update some of the older entries. Basically, that means the site is almost open for business, so to speak. I won’t be ready to announce it on Twitter for a while, at least until I’ve had some time to make sure there aren’t errors or problems I haven’t noticed. That usually takes a few weeks.
I’m not going to go through the long list of things I’ve fixed right now, but I will mention some of the more difficult things I’ve managed to correct. Most of the difficult items have to do with the formatting of the layout, and more specifically, making sure there is continuity between the blog and the static pages. Since they both use Bootstrap you’d think that wouldn’t be much of a problem, but it turned out to be something I worked on for days. To give a quick example, the sidebars on some of the pages were different from each other, and then they collapsed differently as well. Since I’m familiar with Bootstrap, but not an expert in it, I first had to fix one and then find out why the other was misbehaving. Like I said, it took me days to figure it out.
So I just wanted to draw attention to a site that I have been using for about ten years: the site that every web designer should know, and that my guess is every true designer already knows about, and that is Codrops. This site is about the best I’ve found for CSS and jQuery effects that will make any site just a little bit better.
I first found the site through a regular search for hover effects about ten years ago. Since then it’s been my one go-to site when I want to add something interactive to my sites. In fact, I used to send my students there to get ideas for their own sites.
To be completely honest, I don’t know very much about the site beyond what I’ve already mentioned, but they are exceptional. This is from their About page:
“Codrops is a web design and development blog that publishes articles and tutorials about the latest web trends, techniques and new possibilities. The team of Codrops is dedicated to provide useful, inspiring and innovative content that is free of charge.
What started as an experimental blog became an exciting playground for sharing the passion for web design and web development.
The web is innovating each and every day, pushing the boundaries of how websites are built from the fundamental structure to the most delicate interaction effects. And on Codrops we want to share some of that.
We are always looking for creative minds to join us, write for us, explore, collect, engage… So, if you would like to become part of Codrops, please contact us! “
-Taken from the site on May 18, 2019 without permission
Again, there are some things I want to stress here. The first is that the site is special: it has great content, and I think every designer should be going there. The second is that I am very grateful there are still sites like Codrops on the web; in my opinion it exemplifies what the web is about. The last is that I want to bring attention to them, because they deserve it.
There are several aspects of that site that I’m trying to integrate into my idea of what Kennethcurtis.com is about, and I’m hoping that, given some time to get this site settled, I’ll be able to have guest interviews. The people at Codrops are at the top of the list of people I’d love to interview. I know I’m talking about a site and saying I’d like to interview them… the truth is I am really curious about one of the contributors, named Mary Lou. I’m not actually sure whether that is a real person or not, but I’d like to invite her for an interview.
I don’t expect there will be very many people who have stopped by to see how the site is coming along. In fact, I don’t think there will be anyone, since I haven’t shared the domain name with anyone that I can think of. Having said that, though, that doesn’t take into consideration people who would come here just to check up on me, or those who got here through searches. I saw that duckduckgo.com had this site listed a few days ago.
Now the update. I finally had true internet access today for the first time after nearly five days of very spotty connections. It felt good to actually be working on the site. So today I tried to normalize all the pages… I think what I’m trying to say is that I added all the base pages and double-checked the links. I still need to change the default images in the headers; it really doesn’t make sense that only two pages are unique.
I also completed some of the standard pages that sites should have. You know the pages I’m speaking about: the “terms of use” page and the “privacy policy” page. They aren’t glamorous, but they are necessary and kind of fun to do.