Google developed an artificial neural network to interpret imagery. The aim was for a computer to look at an image and be able to say something like, “That’s a golden retriever”, or “That’s John Connor…”. The networks are honed by exposing them to volumes of images, such as of animals, trees, or buildings, so that they get better at picking out the common features of those subjects. Essentially, it’s a filtering process that starts with easy stuff like detecting edges, then shapes, before moving on to identifying what is being seen.

That is already interesting, but the Google team discovered that the process could be reversed, so that the networks would generate images of animals, trees, and so on, when asked to enhance the features they were looking for. The computer finds dogs where there weren’t any, because it was looking for dogs; it sees things that aren’t there, and renders them. The result is something like seeing images in clouds, or while tripping balls on acid. The network can even repeat the process on its own rendered images, in a sort of feedback loop, refining the dogs, or whatever, that it created from anything that remotely resembled a dog.

At first I was modestly intrigued, but now I’m starting to see the threat it poses.
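For the curious, the reversal boils down to gradient ascent on the pixels: instead of adjusting the network’s weights to match a label, you adjust the image itself so that a chosen layer fires harder. Here’s a minimal sketch of that idea in PyTorch, assuming torchvision’s pretrained GoogLeNet stands in for Google’s network; the layer choice (`inception4c`), step count, and learning rate are all illustrative, not taken from the original DeepDream code.

```python
# A minimal DeepDream-style sketch, not Google's actual code. Assumes PyTorch
# and torchvision are installed; layer and hyperparameters are illustrative.
import torch
from PIL import Image
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).to(device).eval()
for p in model.parameters():  # we optimize pixels, not weights
    p.requires_grad_(False)

# Grab activations from an intermediate layer with a forward hook.
acts = {}
model.inception4c.register_forward_hook(lambda mod, inp, out: acts.update(feat=out))

def dream(img, steps=20, lr=0.05):
    """Gradient ascent on the image: nudge pixels so the chosen layer fires harder."""
    img = img.clone().detach().requires_grad_(True)
    for _ in range(steps):
        model(img)
        loss = acts["feat"].norm()  # how strongly the layer responds
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
    return img.detach()

# Preprocessing is simplified (no ImageNet normalization) to keep this short.
to_tensor = transforms.Compose([transforms.Resize(256),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])
img = to_tensor(Image.open("input.jpg").convert("RGB")).unsqueeze(0).to(device)

# The feedback loop: dream on the network's own output, so half-formed dogs
# (or whatever the layer responds to) get refined into more dog-like dogs.
for _ in range(3):
    img = dream(img)
```

The outer loop is the feedback trick described above: the image the network hallucinated on is handed straight back to it, so whatever it faintly “saw” the first time gets amplified on each pass.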