I have now migrated a library discovery layer twice in two years. At my previous institution I finished an EBSCO Discovery Service update just before leaving. Then I arrived at my current job, assessed the environment, and realized that the Primo instance I’d inherited was still running Classic — not Primo VE, not NDE, but the original, pre-Angular, pre-modern-stack Primo that Ex Libris has been nudging libraries away from for years. Nobody had upgraded it because, until I arrived, there was no dedicated systems administrator to do it.
So I skipped VE entirely and went straight to NDE. Here’s what that was like.
Some context on the Primo landscape
For readers outside the library systems world: Primo is Ex Libris’s discovery layer, the search interface that sits in front of an institution’s catalog, licensed databases, and other resources. Over the past several years Ex Libris has moved through three distinct generations of the product. Primo Classic was the original, a Java-based system with a customization model built around XML configuration and a front-end that predates the current era of JavaScript frameworks. Primo VE was a transitional version, introducing a new Angular-based front-end while retaining much of the underlying architecture. Primo NDE (New Discovery Experience) is the current generation, fully cloud-managed, with a different customization model and a significantly different operational relationship between the library and Ex Libris.
The typical migration path was Classic to VE, then eventually VE to NDE. My institution never made the first hop, which meant I was starting from further back than most, but also that I didn’t have to manage two transitions sequentially.
What skipping the middle version actually meant
The practical consequence of jumping directly to NDE was that I had no institutional muscle memory for how the new system worked. Libraries that went through VE at least had staff who had touched the Angular interface, who understood the new customization approach in outline even if they hadn’t gone deep. We had none of that. The Classic instance had been configured over years by people who were no longer at the institution, in ways that weren’t always documented. The task was to understand what we had, decide what needed to survive the migration, and rebuild it in an environment that works completely differently.
In Classic, heavy customization happens through XML pipes, tiles, and template overrides. The system is extremely flexible and, correspondingly, extremely easy to turn into an unmaintainable tangle of local changes that nobody fully understands. NDE moves the customization model toward a CSS/JavaScript injection approach with a more constrained sandbox, which sounds like a step backward in flexibility but in practice tends to produce more maintainable configurations. It’s harder to shoot yourself in the foot. It’s also harder to do some of the things Classic made possible, and you spend a fair amount of time figuring out which category a given requirement falls into.
Discovery migrations are metadata migrations
One thing that both the EDS update and this migration reinforced: discovery layer projects are mostly metadata projects. The interface work gets the attention, and the interface work is genuinely important, but the bulk of the consequential decisions are about what you’re surfacing and how.
At the EDS institution, the main challenge was reconciling holdings data between the ILS (integrated library system) and the discovery index, making sure that what EBSCO thought we had licensed matched what we actually had licensed, and that full-text links were resolving correctly. At my current institution, the Primo migration involved similar questions about the Central Discovery Index (CDI), Ex Libris’s shared metadata pool, and how our local holdings mapped onto it. When something isn’t findable after a migration, the problem is almost never the interface. It’s almost always a metadata mismatch somewhere in the pipeline.
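At its core, that reconciliation work is a diff between two identifier sets. A minimal sketch of the idea, assuming hypothetical CSV exports (`ils_holdings.csv` from the ILS, `index_holdings.csv` from the discovery index) that each carry a standard-number column — the file names and column name are illustrative, not any vendor’s actual export format:

```python
import csv

def load_identifiers(path: str, column: str) -> set[str]:
    """Read one column of identifiers from a CSV export,
    normalizing case and stripping hyphens so ISSN/ISBN variants match."""
    with open(path, newline="", encoding="utf-8") as f:
        return {
            row[column].strip().replace("-", "").lower()
            for row in csv.DictReader(f)
            if row.get(column, "").strip()
        }

def reconcile(ils_path: str, index_path: str, column: str = "identifier") -> dict:
    """Flag titles that exist on one side of the pipeline but not the other."""
    ils = load_identifiers(ils_path, column)
    index = load_identifiers(index_path, column)
    return {
        "missing_from_index": sorted(ils - index),  # owned but not discoverable
        "missing_from_ils": sorted(index - ils),    # discoverable but not owned
    }
```

Real reconciliation is messier than this — identifiers are missing or dirty, and match points vary by material type — but the shape of the problem is exactly this set comparison, run repeatedly as the metadata pipeline is corrected.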
This is why discovery migrations require someone who can think across layers. You need to understand the front-end well enough to configure and customize it, but you also need to understand the data flows underneath: how records get into the index, how availability information gets attached to them, how the link resolver connects a discovery result to full text. A migration that gets the interface right but leaves the metadata in a degraded state is a migration that users will experience as a regression, even if the new UI is objectively better.
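The link-resolver leg of that pipeline is concrete enough to illustrate. A discovery result reaches full text by packing citation metadata into an OpenURL (ANSI/NISO Z39.88-2004, key/encoded-value format) aimed at the resolver. A minimal sketch — the resolver base URL and the citation are made up for illustration; a real base URL comes from the library’s link-resolver configuration:

```python
from urllib.parse import urlencode

def build_openurl(base_url: str, citation: dict) -> str:
    """Build an OpenURL 1.0 (Z39.88 KEV) link-resolver URL
    for a journal article from basic citation metadata."""
    params = {
        "ctx_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.genre": "article",
    }
    # Map plain field names onto the rft.* keys defined by the journal format.
    for field in ("atitle", "jtitle", "issn", "volume", "issue", "spage", "date"):
        if citation.get(field):
            params[f"rft.{field}"] = citation[field]
    return f"{base_url}?{urlencode(params)}"

# Hypothetical endpoint and citation, purely illustrative.
url = build_openurl(
    "https://resolver.example.edu/openurl",
    {"atitle": "An example article", "jtitle": "Journal of Example Studies",
     "issn": "1234-5678", "volume": "41", "spage": "12", "date": "2023"},
)
```

When a full-text link fails, the debugging question is usually which hop degraded: the record’s metadata in the index, the OpenURL built from it, or the resolver’s knowledge base entry it lands on.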
Being the first dedicated sysadmin
There’s a particular kind of institutional archaeology that happens when you’re the first person with a dedicated systems role. The systems exist. They work, more or less. But the decisions that shaped them are embedded in the configuration rather than in any documentation, and the people who made those decisions may have left years ago.
Some of what I found in the Classic instance was intentional and worth preserving. Some of it was accumulated workarounds for problems that no longer existed. Some of it consisted of configurations that had probably never worked correctly, quietly doing nothing, left in place because nobody had the access or the context to remove them. Disentangling those categories takes time, and it means accepting that you will sometimes make the wrong call.
What it doesn’t mean is being paralyzed by the history. At some point you have to decide what the system should do for your users now, document your reasoning, and build toward that. The goal isn’t to perfectly reconstruct what was there before. The goal is a discovery layer that works reliably, that library staff can maintain without heroics, and that the next person to inherit it can understand without having to excavate.
What I’d tell someone starting this process
Two migrations in two years is a reasonable sample size for a few observations. First, budget more time for the metadata audit than you think you need and less time worrying about the UI. Users notice when things are missing; they adapt to interface changes faster than you expect. Second, get staff involved early, not just for buy-in, but because the people who answer reference questions know things about how patrons search that you will not discover from system logs alone. Third, document as you go, even imperfectly. A Markdown file with rough notes about why you made a particular configuration choice is worth more than a comprehensive document you plan to write after the migration is done and never actually write.
And fourth: if you’re the first dedicated sysadmin, accept that you will spend part of your first year on work that looks like maintenance but is actually foundation. You’re not just running the systems. You’re building the conditions under which the systems can be properly run. That’s a different thing, and it takes longer, but it’s worth it.