
Creativity is considered a bastion of humanness, somewhat outside the realm of artificial intelligence. Yet AI can be used to generate variations on artistic themes that appear to be creations in their own right.

But I would say this computer-based creativity is a reflection of the creativity of the programmers who build the algorithms that simulate this most human of traits.

OpenAI has created a program called DALL-E 2 that can create or modify images based on textual descriptions. The striking capabilities of this technology have vast implications for creativity as part of the field of synthetic media. Deepfakes - AI-generated images, video, and audio derived from source files - can create videos of people saying and doing things that never happened. DALL-E 2 adds a very interesting capability: it interprets textual descriptions, combines concepts with an artistic style, and adds visual elements that are consistent with the subject and style, blending in natural effects such as lighting and shadows. The result is amazing.
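For those who want to experiment, here is a minimal sketch of what a text-to-image request can look like against OpenAI's image generation endpoint; the prompt, size, and response handling are placeholders for illustration, not a recipe from the DALL-E 2 paper.

```python
# Minimal sketch: request an image from a text prompt via OpenAI's
# image generation endpoint. Assumes an API key in OPENAI_API_KEY;
# the prompt and size are placeholder values.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "prompt": "a cute cat painted in the style of Vermeer",
        "n": 1,
        "size": "1024x1024",
    },
)
response.raise_for_status()
print(response.json()["data"][0]["url"])  # URL of the generated image
```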

The key to these capabilities is having the correct training data (no surprise) - images labeled with concepts that tell the algorithm the characteristics of a cat, for example. One image was of a "cute cat" (one could argue that most cats are pretty cute - that is, if you like cats), which means the program had to be trained on cute cats. The question I have is whether this labeling is "post-coordinated" or "pre-coordinated": was the training on "cats" and separately on "cuteness" (post-coordinated, just as e-commerce sites use separate facets to filter products), or on "cute cats" as a single combined concept (so-called pre-coordination)? My guess is the latter, given the subjectivity of what "cute" is.
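To make the distinction concrete, here is a small hypothetical sketch of the two labeling styles; the attribute names and image IDs are made up for illustration.

```python
# Hypothetical training labels for the same image, illustrating the
# difference between post-coordinated (separate facets) and
# pre-coordinated (combined concept) labeling.

# Post-coordinated: independent facets that can be filtered and
# recombined, like facets on an e-commerce site.
post_coordinated = {
    "image_id": "img_0042",
    "subject": "cat",
    "qualities": ["cute", "fluffy"],
}

# Pre-coordinated: the qualities are baked into a single concept,
# so "cute cat" is learned as one label.
pre_coordinated = {
    "image_id": "img_0042",
    "concept": "cute cat",
}

# A faceted query can intersect post-coordinated labels...
def matches(record, subject, quality):
    return record.get("subject") == subject and quality in record.get("qualities", [])

print(matches(post_coordinated, "cat", "cute"))      # True
# ...whereas the pre-coordinated label can only be matched as a whole.
print(pre_coordinated["concept"] == "cute cat")      # True
```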

    Image credit: https://daleonai.com/dalle-5-mins

When working on a digital asset management project a couple of years back, we had to define ambiguous and subjective attributes. One image of lollipops with faces on them was interpreted as cute by some and creepy by others.

Images are notoriously difficult to describe using text. But consider the labels as handles on existing images rather than as descriptions of the images. The training data - the images representing concepts - therefore needs to be carefully chosen to define the inputs to DALL-E. This is generally true of any AI technology. In many cases the data is more important than the algorithm. Humans select training data, and ultimately humans have to label that data.
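One practical way to handle subjective labels like "cute" is to collect judgments from several annotators and keep the level of agreement alongside the label. The sketch below is a simplified, hypothetical illustration of that idea, using the lollipop image as the example.

```python
from collections import Counter

# Hypothetical annotations for the lollipop image: three annotators,
# three judgments of the same subjective attribute.
annotations = {
    "img_lollipops": ["cute", "creepy", "cute"],
}

def majority_label(labels):
    """Return the most common label and its share of the votes."""
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels)

for image_id, labels in annotations.items():
    label, agreement = majority_label(labels)
    print(f"{image_id}: {label} (agreement {agreement:.0%})")
# img_lollipops: cute (agreement 67%)
```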

The application of algorithms to creative endeavors allows humans to use judgment in selecting and evaluating the AI-generated outputs. This is a great example of augmenting distinctly human abilities in ways that can improve human creativity and productivity. And it depends on the right training data, correctly labeled, just as in every application of AI.

    Here's an explanation of the CLIP model that DALL-E 2 uses to connect text and images:
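In practice, CLIP scores how well a piece of text matches an image. Here is a brief sketch using the openly available CLIP weights on Hugging Face; the image path and captions are placeholders.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the publicly released CLIP checkpoint from Hugging Face.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat.jpg")  # placeholder image path
captions = ["a cute cat", "a creepy lollipop", "a bowl of fruit"]

# Score each caption against the image and normalize to probabilities.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)

for caption, prob in zip(captions, probs[0].tolist()):
    print(f"{caption}: {prob:.2f}")
```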

     

     A gallery of some DALL-E 2 generated art:

    https://www.instagram.com/twominutepapers/

    Another highly geeky deep dive into OpenAI's paper on DALL-E 2:

     

     

Seth Earley
Seth Earley is the Founder & CEO of Earley Information Science and the author of the award-winning book The AI-Powered Enterprise: Harness the Power of Ontologies to Make Your Business Smarter, Faster, and More Profitable. He is an expert with 20+ years of experience in Knowledge Strategy, Data and Information Architecture, Search-based Applications, and Information Findability solutions. He has worked with a diverse roster of Fortune 1000 companies, helping them achieve higher levels of operating performance.
