In my Nano Tips series on ChatGPT so far, I’ve covered Data Storytelling and Visualization, and Technical Prompts. Since “ChatGPT can now see, hear, and speak,” I thought it would be a good idea to create additional videos explaining those multimodal capabilities.
In this course, you will learn how to configure advanced data analysis in ChatGPT, execute multistep document interpretation, join multiple tables with prompts, and analyze data using linear regression.
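For context, ChatGPT’s Advanced Data Analysis works by writing and running Python on the data you upload. The snippet below is a minimal sketch of the kind of code a linear regression prompt might produce behind the scenes; the file name and column names are hypothetical placeholders, not part of the course material.

```python
# A minimal sketch of the kind of Python that ChatGPT's Advanced Data Analysis
# typically generates for a linear regression prompt.
# "sales_data.csv", "ad_spend", and "revenue" are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Load the uploaded dataset
df = pd.read_csv("sales_data.csv")

# Fit revenue as a linear function of ad spend
X = df[["ad_spend"]]
y = df["revenue"]
model = LinearRegression().fit(X, y)

print(f"Slope: {model.coef_[0]:.3f}")
print(f"Intercept: {model.intercept_:.3f}")
print(f"R^2: {model.score(X, y):.3f}")
```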
Along the way, I will demonstrate a handful of skills useful for designers, such as how to extract and organize text from images, code an app element from an image, create UX design annotations, experiment with text-to-image generation in DALL-E, and much more.
By the end of the course, you’ll also be ready to browse the internet with Bing and use AI-enhanced voice conversations and commands.
Here is a quick sneak peek into my favorite video from the course:
Extract and organize text from images, from Nano Tips for Navigating Advanced Data Analysis, Vision, and Voice in ChatGPT by Lachezar Arabadzhiev
If you have any questions or want more content like this, feel free to do one of the following:
🎥 Explore my LinkedIn Learning courses on storytelling through data and design!
📰 Follow me on LinkedIn, and click the 🔔 at the top of my profile page to stay up to date with my latest content!
✅ Subscribe to receive the biweekly “My Next Story Is…” newsletter. Yep, it is my brand new newsletter!