

Prompt techniques and Function Calling

Two interesting prompt techniques I've come across.


Source :

Image : Tipping

Also see :

Threatened Lives

Source :

Image : Lives

General Prompt

Source :

Image : System Prompt

Function Calling Blog

Worth reading in more detail

Source :

UI, Dystopia and Rebyte

Some things I've found I really like.

Maggie Appleton's writings

Her writing really resonates with me around cultural anthropology, product, and AI. It's nice that she is doing this in the realm of design. I also enjoy her takes on non-chatbot UIs.

The antilibrary is a cool concept.

She works at Elicit, which also sounds interesting.

Need to do more research on this piece for sure.

Tools for thought

More tools for thought - seminal essay

Microsoft Ongoing Research

An interesting set of current research is ongoing under a grant funded by Microsoft.


There's a broader presentation here on the future of work. It's also on my reading list page.

It includes the ideas of micromoments and microproductivity. In fact, this is a good indicator that decomposing work into smaller tasks, whether in the GIST framework or for agents in general, is something AI is really good at.

Finally the productivity report itself is here :

Idea Spaces

How many ideas can be generated? As product managers or design thinkers, divergent thinking requires coming up with new problems, opportunities, and solutions. Then a lot of the work is convergent, whittling that initial set of ideas down through various techniques such as affinity mapping.

But at its base, how many ideas does an idea space have? If we had groups of people or AI tools, how many ideas could we generate?

Paper 1

Opportunity Spaces in Innovation: Empirical Analysis of Large Samples of Ideas

It makes reference to parallel search of opportunities in a tournament of ideas as a basic approach. Three key questions :

  • how much redundancy results from parallel search?
  • how large are the opportunity spaces?
  • are unique ideas more valuable than ideas that are similar to others?

This is followed by a brief but interesting literature review of the opportunity and idea generation space. The core dataset comes from groups of students in a class generating ideas and then blind-scoring those ideas.

As an interesting aside, for the next part they developed a technique to compare ideas pairwise for similarity. The brute-force method is now entirely feasible with LLMs.
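As a toy illustration of that brute-force pairwise comparison, here is a minimal sketch using bag-of-words cosine similarity as a cheap stand-in for LLM- or embedding-based similarity; the idea texts and the 0.5 threshold are my own invented examples, not data from the paper.

```python
from itertools import combinations
from collections import Counter
from math import sqrt

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two idea texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def redundant_pairs(ideas: list[str], threshold: float = 0.5) -> list[tuple[str, str]]:
    """Brute-force O(n^2) comparison of every pair of ideas."""
    return [(a, b) for a, b in combinations(ideas, 2)
            if cosine_similarity(a, b) >= threshold]

ideas = [
    "a mobile app to remind you to water plants",
    "an app to remind people to water plants",
    "a drone that delivers coffee to offices",
]
print(redundant_pairs(ideas))  # flags only the two plant-watering ideas
```

Swapping `cosine_similarity` for an embedding model (or a pairwise LLM judgment) keeps the same O(n^2) loop but captures semantic rather than surface overlap.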

There are also some additional interesting approaches to clustering ideas, which may be totally superseded by modern search and information-retrieval paradigms, but maybe not.


Some conclusions from the paper

  1. When a large number of independent efforts to generate ideas are conducted in parallel, the redundancy is quite small even for narrowly defined domains.

  2. Using redundancy as a clue, the total number of unique ideas in a narrowly defined domain is ~1000 and in a broadly defined domain is ~2000.

  3. Ideas that are more distinct from other ideas are not generally considered more valuable.
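One back-of-the-envelope way to see how redundancy yields a size estimate is a Lincoln-Petersen capture-recapture calculation; this is my illustration of the intuition, not necessarily the paper's own estimator.

```python
def estimate_idea_space(n1: int, n2: int, overlap: int) -> float:
    """Lincoln-Petersen estimator: total unique ideas ~= n1 * n2 / overlap.

    n1, n2  -- ideas generated by two independent groups
    overlap -- ideas the two groups generated in common (the redundancy)
    """
    if overlap == 0:
        raise ValueError("no overlap: the idea space is larger than we can estimate")
    return n1 * n2 / overlap

# Two independent groups of 100 ideas each sharing only 10 ideas
# suggest an idea space of roughly 1000 ideas.
print(estimate_idea_space(100, 100, 10))  # 1000.0
```

Low redundancy between independent idea-generation efforts therefore implies a large idea space, which is consistent with conclusions 1 and 2 above.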

This table below is the interesting piece for estimation :

Idea Space

Product Equations

What if we defined the possibility space for AI modules as the Cartesian product of capabilities, containers and modes? We'd end up with a list of tuples, each of which is a specific combination.



modes = {linguistic, visual, audio, gestural, spatial, tactile, olfactory, gustatory, multimedia, kinesthetic, ...}

capabilities = {...}

containers = {...}


Number_Of_Combinations = |modes| * |capabilities| * |containers|

Earlier posts identified three concepts: capabilities, containers, and multimodal literacies.

What if we treated the idea of modular lego as a math-like problem for product specification? Something like this :

Mode + Capability + Container = Module
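The Cartesian product above can be sketched directly; the three small sets below are illustrative subsets standing in for the full lists of modes, capabilities, and containers.

```python
from itertools import product

modes = {"linguistic", "visual", "audio"}           # subset for illustration
capabilities = {"summarise", "create", "retrieve"}  # subset for illustration
containers = {"search bar", "notes", "wizard"}      # subset for illustration

# Each (mode, capability, container) tuple is one candidate module.
modules = list(product(modes, capabilities, containers))

number_of_combinations = len(modes) * len(capabilities) * len(containers)
assert len(modules) == number_of_combinations
print(number_of_combinations)  # 27
```

Even these tiny sets yield 27 candidate modules, so the full sets would produce a large possibility space to prune with convergent techniques like the affinity mapping mentioned earlier.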

Multimodal Literacies

Image : MultiModal 4

Multimodal literacy - taking in different types of inputs and producing different types of outputs - is an increasingly important feature of LLMs. A few obvious modes are text, speech, audio, image, and video. But I thought I'd just ask ChatGPT to help me learn more. And here's what I got back.

Image : MultiModal 1, MultiModal 2, MultiModal 3

For far more interesting and technical posts on multimodality, see Multimodality and Large Multimodal Models or Multimodal Large Language Models: A Survey.

Capabilities and Containers

As we implement AI features into our products, we need a mental model that helps make sense of this domain. A shared language is a starting point for sense making.

Capabilities are core actions that generative AI can accomplish and are usually represented as verbs. Examples include: summarise, create, retrieve, research, and interact. Capabilities are a bit like Jobs-To-Be-Done for AI.

Containers are parts of a user interface or architecture that extract, transform and load information. These include search bars, notes, wizards, and databases.

Capabilities are connected to containers in a many-to-many relationship. Thinking of AI features as capability-container modules that can be reused in different parts of an application, much as a design system does for user interfaces, helps drive business cases.
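A minimal sketch of that many-to-many relationship, with hypothetical example capabilities and containers (the pairings are my own illustration):

```python
# Hypothetical many-to-many map: which containers can host each capability?
capability_to_containers = {
    "summarise": ["notes", "search bar"],
    "retrieve": ["search bar", "database"],
    "create": ["wizard", "notes"],
}

# Invert the map: which capabilities can each container host?
container_to_capabilities: dict[str, list[str]] = {}
for capability, containers in capability_to_containers.items():
    for container in containers:
        container_to_capabilities.setdefault(container, []).append(capability)

# A "module" is one capability-container pairing, reusable across the app.
modules = sorted(
    (capability, container)
    for capability, containers in capability_to_containers.items()
    for container in containers
)
print(container_to_capabilities["notes"])  # ['summarise', 'create']
```

Inverting the map shows the reuse: the same "notes" container hosts both summarise and create, and the same "search bar" serves both summarise and retrieve.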


Capabilities applied to containers encourage transferability.