Building an Artwork Catalog Without Losing My Mind
Over the years, I’ve accumulated hundreds of photographs of finished artworks, studies, and works in progress. Like many artists, I knew I should have a proper catalog—titles, descriptions, mediums, dimensions—but the thought of manually writing metadata for hundreds of pieces was overwhelming.
So this year, I decided to approach the problem the same way I approach my studio work: build a system that supports judgment instead of replacing it.
What follows is how I created a first-pass catalog of my artwork using a local, privacy-respecting AI workflow—one designed to save time while keeping creative decisions firmly in my hands.
The Real Problem Wasn’t Technology
The biggest obstacle wasn’t software. It was scale.
When you have a few dozen pieces, writing descriptions by hand is manageable. When you have hundreds, it becomes a barrier—especially when much of the work still requires careful human review anyway.
What I wanted was:
A neutral, professional first draft for each piece
Language suitable for galleries and licensing platforms
A workflow that avoided obvious “AI-sounding” copy
A system that flagged uncertainty instead of inventing confidence
In short: help with the heavy lifting, not decision-making.
A Local-First Approach
Rather than uploading my entire archive to a third-party service, I built a local, first-pass cataloging pipeline on my own computer.
At a high level, the process looks like this:
Curate the input
Generate neutral descriptions
Flag uncertainty instead of guessing
Export everything to a spreadsheet for review
The result is a working catalog—not a finished one—and that distinction matters.
The Tools Behind the Process
For those curious about how this was built, here’s a high-level look at the tools involved. None of these are exotic, and most are either free or already part of a typical creative workflow.
Local AI Model (Vision + Text)
At the core of the system is a locally running vision-language model managed by Ollama.
Ollama allows AI models to run directly on your own machine, which means:
No uploading artwork to external servers
No ongoing usage fees
Full control over when and how the system is used
The model itself performs a single task:
generate a conservative, factual description of each artwork based only on visible elements.
It is intentionally constrained to avoid interpretation, symbolism, or confident guesses.
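That single task can be sketched as one call to Ollama's local HTTP API. This is a minimal illustration, assuming a LLaVA-style vision model is already pulled and the Ollama server is running on its default port; the model name and the prompt wording here are placeholders, not the exact prompt used in my workflow.

```python
import base64
import json
import urllib.request

# Conservative, facts-only instruction. Wording is illustrative only.
PROMPT = (
    "Describe this artwork in two to three neutral, professional sentences. "
    "Mention only elements that are clearly visible. "
    "If the medium is unclear, write 'medium uncertain' instead of guessing."
)

def describe_artwork(image_path, model="llava", host="http://localhost:11434"):
    """Send one image to a locally running Ollama vision model, return its description."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    payload = json.dumps({
        "model": model,
        "prompt": PROMPT,
        "images": [encoded],   # Ollama accepts base64-encoded images for vision models
        "stream": False,       # return one complete response instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()
```

Because everything goes to localhost, the artwork never leaves the machine.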
Python for Automation
The workflow is orchestrated with Python, which handles:
Iterating through folders of artwork images
Sending each image to the local AI model
Enforcing strict output rules
Capturing results as structured data
Writing everything to a CSV file
Python is well-suited for this kind of “glue work”—connecting tools together into a repeatable process without requiring a complex application or interface.
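The glue layer itself can be sketched in a few lines. Here, describe_fn stands in for whatever function produces the description (such as a local model call), and the column names are an assumption for illustration, not the exact schema of my catalog.

```python
import csv
from pathlib import Path

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".tif", ".tiff"}

def build_catalog(image_dir, output_csv, describe_fn):
    """Walk a folder of artwork images, describe each one, and write rows to a CSV."""
    rows = []
    for path in sorted(Path(image_dir).rglob("*")):
        if path.suffix.lower() not in IMAGE_EXTENSIONS:
            continue  # skip notes, sidecar files, and anything that isn't an image
        rows.append({
            "filename": path.name,
            "description": describe_fn(path),
            "medium": "",       # confirmed by hand in the spreadsheet
            "dimensions": "",   # added manually during review
        })
    with open(output_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(
            f, fieldnames=["filename", "description", "medium", "dimensions"]
        )
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

Deliberately leaving medium and dimensions blank keeps the script honest: it drafts text, and the spreadsheet stage fills in the facts only a human can verify.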
CSV + Google Sheets for Review
Instead of sending results directly to a database or website, the output is written to a simple CSV file.
That file is then opened in Google Sheets, where it becomes the real workspace:
Reviewing and editing descriptions
Confirming or correcting mediums
Adding dimensions manually
Flagging works that need deeper attention
This step is critical. The spreadsheet is where human judgment takes over.
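One small automation can make that review faster: pre-flagging rows whose draft text hedges, so uncertain pieces rise to the top of the spreadsheet. The marker list below is an assumption for illustration; in practice it would match whatever uncertainty phrasing the model is told to use.

```python
# Phrases that signal the model was unsure. Illustrative list, not exhaustive.
HEDGE_MARKERS = ("medium uncertain", "possibly", "appears to be", "unclear")

def needs_review(description):
    """Return True if a draft description hedges and deserves closer human attention."""
    text = description.lower()
    return any(marker in text for marker in HEDGE_MARKERS)
```

Writing the result of this check into its own CSV column means the spreadsheet can be sorted so the doubtful entries are reviewed first.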
Squarespace for Publishing
The final destination for much of this content is my website on Squarespace.
Having structured, reviewed descriptions makes it much easier to:
Populate portfolio pages
Maintain consistency across works
Reuse text for submissions or licensing platforms
The AI never publishes anything directly. It only prepares material for review.
Why This Still Requires Human Judgment
This process does not replace curatorial thinking.
In fact, it clarifies it.
I still decide:
Titles
Final descriptions
Medium classifications
Dimensions
Which works are ready for public presentation
What’s gone is the blank-page problem.
Instead of staring at hundreds of empty fields, I’m responding to something concrete—editing, refining, and correcting. That’s a far better use of creative energy.
Avoiding “AI Voice”
One of my biggest concerns was avoiding language that would make it obvious the text was machine-generated.
To address that, the system is explicitly instructed to:
Write as if authored by a human cataloger
Refer to “this artwork” or “this piece,” never “the image”
Avoid narrating its own uncertainty in the description itself
Avoid inflated or generic art-crit language
The goal isn’t to sound impressive.
The goal is to sound normal, professional, and usable.
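Those constraints translate into a fixed block of instructions prepended to every request. The wording below is a rough illustration of how such rules can be encoded, not my exact prompt.

```python
# Illustrative style rules encoding the constraints above; not the author's exact prompt.
STYLE_RULES = """You are a human cataloger writing neutral records for an artist's archive.
Rules:
- Refer to "this artwork" or "this piece", never "the image" or "the photo".
- Describe only what is clearly visible; do not interpret symbolism or intent.
- If a detail is unclear, omit it rather than explaining your uncertainty.
- Avoid inflated art-critical language ("masterful", "evocative", "stunning").
- Write two to three plain, professional sentences."""

def build_prompt(extra_context=""):
    """Combine the fixed style rules with optional per-piece context (e.g. a known title)."""
    if extra_context:
        return STYLE_RULES + "\n\nContext: " + extra_context
    return STYLE_RULES
```

Keeping the rules in one constant means every piece in the catalog is drafted under the same constraints, which is most of what "consistency" means at this scale.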
Why This Matters
Cataloging is invisible work, but it shapes everything downstream:
Gallery submissions
Licensing platforms
Archival clarity
Website organization
Long-term professional sustainability
By building a process that scales, I’m not just saving time—I’m making it easier to keep my work visible, legible, and usable over the long term.
This approach won’t be right for everyone, but for me it strikes the right balance between automation and authorship.
What Comes Next
The current catalog is a first pass. From here:
Descriptions get refined
Mediums get confirmed
Dimensions get added
Stronger pieces get extra attention
Most importantly, the catalog is no longer a looming, unmanageable task.
It’s a living document.