## Update

This project has been deprecated. The new project is: museum-nextjs-search.

My exploration of the vase form using GANs was fun, but ultimately a little disappointing, because generated images are limited to the combination of possibilities within a strictly defined dataset, for example photos of vases. After training a StyleGAN model on vases, I could generate more images of vases. Creating "hybrid" images via transfer learning on disparate datasets (like cross-pollinating vases with beetles) resulted in strange and interesting new possibilities, but the process is labor intensive and also slightly disappointing.

## So what?

It wasn't until the advent of text-to-image generators like DALL·E that my mind was truly blown. Now, one could simply write a text prompt describing the desired image, and results would magically appear. One was no longer limited to specialized datasets; now we were dealing with ALL the images.

Powerful services & frameworks like Elasticsearch & Next.js make it possible for museums to easily build performant, responsive, and accessible faceted searches for their online collections. This project has been deployed on Vercel.

## Dataset

All data was collected via the Brooklyn Museum Open API (a harvesting sketch appears at the end of this post).

## Next.js template

Based on (Website, UI Components), which is an implementation of Radix UI with Tailwind and other helpful utilities.

## Features

- Full-text search, including accession number (see the query sketch below)
- Custom similarity algorithm with combined weighted terms, which can be adjusted (see the weighted-terms sketch below)
- Embedded JSON-LD (VisualArtwork) for better SEO and sharing (see the component sketch below)
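A few illustrative sketches of the features above. First, full-text faceted search: the repository's actual query code isn't reproduced in this post, so the following is a minimal sketch using the official @elastic/elasticsearch client, with an assumed index name (`collection`) and assumed field names (`title`, `artist`, `description`, `accessionNumber`, `medium`). Boosting the accession-number field lets an exact accession lookup rank first, and the terms aggregations are what drive the facet lists in the UI.

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Hypothetical document shape; the real index schema may differ.
interface Artwork {
  title: string;
  artist: string;
  medium: string;
  accessionNumber: string;
  description: string;
}

// Full-text search across several fields (including accession number),
// with aggregations that drive the faceted UI.
export async function searchCollection(query: string, medium?: string) {
  return client.search<Artwork>({
    index: 'collection', // assumed index name
    query: {
      bool: {
        must: {
          multi_match: {
            query,
            // Boost accession numbers so an exact match ranks first.
            fields: ['title^3', 'artist^2', 'description', 'accessionNumber^5'],
          },
        },
        // A selected facet becomes a filter: cheap, cacheable, unscored.
        filter: medium ? [{ term: { 'medium.keyword': medium } }] : [],
      },
    },
    aggs: {
      // Each terms aggregation becomes one facet list in the UI.
      medium: { terms: { field: 'medium.keyword', size: 20 } },
      artist: { terms: { field: 'artist.keyword', size: 20 } },
    },
    size: 24,
  });
}
```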
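Second, the "custom similarity algorithm with combined weighted terms." The post doesn't spell the algorithm out, so this sketch only shows one plausible shape under that description: a bool query whose should clauses each carry an adjustable boost, so artworks sharing more (and more heavily weighted) attributes with the source artwork score higher. The field names and weight values here are assumptions.

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

interface ArtworkAttrs {
  artist?: string;
  medium?: string;
  classification?: string;
}

// Hypothetical weights; tuning these numbers is the "can be adjusted"
// knob from the feature list.
const WEIGHTS = { artist: 4, medium: 3, classification: 2 } as const;

// Score candidates by summing boosted term matches on attributes shared
// with the source artwork; the source itself is excluded via must_not.
export async function similarArtworks(sourceId: string, source: ArtworkAttrs) {
  const should = (Object.keys(WEIGHTS) as (keyof typeof WEIGHTS)[])
    .filter((k) => source[k] !== undefined)
    .map((k) => ({
      term: { [`${k}.keyword`]: { value: source[k]!, boost: WEIGHTS[k] } },
    }));

  return client.search({
    index: 'collection', // assumed index name
    query: {
      bool: {
        should, // each matching attribute adds its weight to the score
        minimum_should_match: 1, // require at least one shared attribute
        must_not: [{ ids: { values: [sourceId] } }],
      },
    },
    size: 12,
  });
}
```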
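Third, the JSON-LD feature. schema.org defines a VisualArtwork type, and embedding it as a JSON-LD script tag is the standard way to expose structured data to crawlers and link unfurlers. A minimal sketch as a Next.js/React component follows; exactly which properties the project emits is an assumption.

```tsx
// A minimal sketch of embedding schema.org VisualArtwork JSON-LD in a
// Next.js page. The field choices are illustrative, not the project's
// exact output.
interface ArtworkJsonLdProps {
  name: string;
  artist: string;
  artMedium: string;
  image: string;
  url: string;
}

export function ArtworkJsonLd(props: ArtworkJsonLdProps) {
  const jsonLd = {
    '@context': 'https://schema.org',
    '@type': 'VisualArtwork',
    name: props.name,
    creator: { '@type': 'Person', name: props.artist },
    artMedium: props.artMedium,
    image: props.image,
    url: props.url,
  };

  // Rendered into the page as structured data for search engines and
  // social sharing, per the "better SEO and sharing" feature.
  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}
    />
  );
}
```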
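Finally, the dataset harvest mentioned in the Dataset section. The Brooklyn Museum Open API is a real service, but its endpoints aren't documented in this post, so everything below (base URL, path, parameters, response shape) is a placeholder illustrating the generic paginated-harvest pattern rather than that API's actual interface.

```ts
import { writeFile } from 'node:fs/promises';

// Placeholder endpoint and parameters; consult the Brooklyn Museum
// Open API documentation for the real interface and authentication.
const API_BASE = 'https://example.org/brooklyn-museum-api'; // hypothetical
const PAGE_SIZE = 100;

// Page through the (hypothetical) objects endpoint and write the
// accumulated records to disk for later indexing into Elasticsearch.
async function harvest(): Promise<void> {
  const objects: unknown[] = [];
  for (let offset = 0; ; offset += PAGE_SIZE) {
    const res = await fetch(
      `${API_BASE}/objects?limit=${PAGE_SIZE}&offset=${offset}`,
    );
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const page = (await res.json()) as { data: unknown[] };
    if (page.data.length === 0) break; // no more records
    objects.push(...page.data);
  }
  await writeFile('collection.json', JSON.stringify(objects));
}

harvest();
```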