Measure Twice, Cut Once: How To Standardize a Salesforce NPSP Account Model

A Case Study: Who Did We Help?

We were asked to help a Canadian nonprofit (approx. 80 employees) standardize their Account Model. One important part of any Salesforce CRM implementation is the Account Model, which defines the way that people (contacts) link to companies (accounts) in the system. This way of relating objects comes from the "Business to Business" origins of the Salesforce product story. The Account Model can vary from system to system (there are 3 primary models), and sometimes within the same system. Often an organization running its business on Salesforce will want (or need) to standardize this model across all its data and records. Optimally, the organization does this standardization pre-emptively, enabling better data quality and system usability; more typically, however, the effort is a reaction to problems with applications and/or integrations, and sometimes it is simply required by a new vendor product. This client had purchased an engagement platform app called Percolator, which they were told required the newest account model; this requirement was the trigger event for account model standardization, and thus begins our story. The client had also been warned (for some time) that the old model caused reporting to run slowly and sometimes time out; more evidence that standardization was a good idea.

This client’s Salesforce ORG (approx. 5 years old) had 800k+ contacts, 5 integrations, 27 packages, 58 triggers and many workflows. The ORG was a production instance sending a very large number of weekly emails via one of their integrations, so business continuity impact was a key concern.

Before We Get Going, A Bit More About NPSP

The Salesforce Nonprofit Success Pack (NPSP) is an open source set of managed packages for Salesforce, specifically oriented to help nonprofit organizations track donors, clients, volunteers and others. It has been developed over many years and has a strong developer community dedicated to it. The NPSP supports 3 account models: the 1:1 model, in which each individual has one account (themselves); the "bucket" model, in which all individuals are connected to a single bucket account, like "Individuals"; and the third, and optimal, model: the household model, in which individuals are related to households.
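Which model an ORG is actually on can often be guessed from a simple contact/account export. The following is a rough heuristic sketch of that idea; the function name and thresholds are our own illustration, not an NPSP utility:

```python
from collections import Counter

def guess_account_model(contact_account_pairs):
    """Heuristic guess at which NPSP account model a data set follows.

    contact_account_pairs: iterable of (contact_id, account_id) tuples,
    e.g. rows exported from a Contact report. Illustrative only.
    """
    counts = Counter(acct for _, acct in contact_account_pairs)
    total = sum(counts.values())
    largest = max(counts.values())
    if largest / total > 0.5:                   # most contacts pile into one account
        return "bucket"
    if all(n == 1 for n in counts.values()):    # every account holds exactly one contact
        return "one-to-one"
    return "household"                          # small multi-contact groupings
```

In practice you would also check the account record types NPSP sets, but a quick distribution check like this is a useful first read on a large, old ORG.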

How to Make This Model Standardization Successful

There are two ways to accomplish this account model shift. One option is an automated tool which will go through the data (just the data...) and make the account model change for you. Please review the web content for this tool very carefully, and heed the warning: "The changes you make with the conversion utility are irreversible changes." The other option is to do the shift "manually," by working through the necessary changes object by object. OpenIssue looked carefully at the risks involved in this process, and opted for the manual approach.

A Quick Note On Risks
Risks for this client meant that inbound lead generation and email acquisition, or outbound emails, could stop. Additionally, more than 20 of their development and program folks would have had significant interruptions to their day-to-day work, not to mention the related potential for anxiety and panic! Given the number of integrations, users, packages and custom APEX, we opted for the manual route... it's not just because we like difficult stuff, you know...

First, the team needs a project manager who can not only keep the project on track, but who can also communicate with many different stakeholders and vendors before the project even really starts (see the note about risks above). Also, having both a data analyst and a systems analyst on the team is critical; both sets of expertise are necessary for a project of this sort to succeed. When integrations are present, the interplay between Salesforce, NPSP package automation, the data model(s) and the integrated systems is much more complex and requires staged testing to fully mitigate the potential for outages.

There are three places where an account model shift of this sort will have important implications. First, there are the packages the organization has installed; some of those packages may or may not be affected by the account model shift. Second, there are workflows and triggers, and again, some of these may or may not be affected. Last, but not at all least, are the integrations the Salesforce ORG has with other applications. In this case, the org was integrated with Marketo, WordPress, and Drupal, not to mention the new integration with the Percolator product waiting at the door...

  • Detailed Workflow, Package and Integration Audit – Measurement of Complexity

The first thing OpenIssue took on was to measure the complexity of the project. We combine a set of assessment best practices to obtain a "MOC", or Measure of Complexity, which considers all aspects of the effort, including human factors, technology, roll-back plans, external events and constraints, and a combination of internal and ecosystem precedent. This project (on our scale of 1-5, where 5 = most complex) hit right around 3.2. Anything over 3 forces a higher level of prep and testing. This initial assessment let us properly resource the project and manage timeline expectations for all parties involved.
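Mechanically, a MOC-style score is just a weighted average over the assessment factors. The factor names and equal default weights below are our illustrative assumptions; the real rubric is more involved:

```python
def measure_of_complexity(scores, weights=None):
    """Weighted average of 1-5 factor scores, rounded to one decimal.

    scores:  dict mapping factor name -> score in 1..5
    weights: optional dict of factor weights (defaults to equal weighting)
    """
    weights = weights or {k: 1.0 for k in scores}
    total_weight = sum(weights[k] for k in scores)
    weighted = sum(scores[k] * weights[k] for k in scores)
    return round(weighted / total_weight, 1)

# Hypothetical factor scores for a project like this one:
factors = {"human": 3, "technology": 4, "rollback": 3,
           "external": 3, "precedent": 3}
```

With equal weights, the hypothetical factors above average out to 3.2, which would cross the "over 3" threshold and trigger the higher level of prep and testing.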

The bulk of the project fell into two key areas of effort: first, the audits (packages, workflows, triggers, integrations), and second, coordination with Marketo technical staff on the T5 cut-over sequence of events and integration sync planning. We always create a technical workbook (we love Google Apps!) for our projects and have linked a redacted version of this project's workbook here. A significant part of our consulting work over the years has met with an all-too-frequent dead end called "we don't know". This project, and this type of transition in general, will deliver you those responses, from vendors and from your own team; be ready, be brave!

  • Detailed Sequence – What is Our Step By Step Process for this Org?

Any successful CRM project (or really, any technology project) will have a planned sequence of events. There are always some things you need to make sure you do before or after others, and documenting and closely reviewing two levels of sequences (Seq 1 = high level, Seq 2 = low level) is a critical step for this type of multi-variable, multi-vendor transition project. The link above to our Project Workbook will get you access to our Seq 1; let us know if you'd like the Seq 2 document... it's a lot more detailed! You can dive into the timeline here.

  • Sandbox Testing – More Risk Analysis

We had to enable the Percolator people to test their app against this client's large-row-count ORG while also proceeding through our audit steps, so we decided on 2 sandbox environments for the project (one Dev, one Full; the latter is a paid Salesforce service). With 2 sandboxes, one engineer could test the sequence of data changes while the vendor tested app performance (both in the Full sandbox), and a second engineer could do package, trigger, workflow and integration testing in the Dev sandbox. Setup of these environments was key to coordinating the 5 groups involved in this transition.

Summary – Front-Loaded Process

This project took almost 125 hours, and much of that work was in analyzing risk and communicating with vendors. It is the sort of project where, if you knew everything up front, you could do it quite quickly (less than 20 hours), but you never know everything! Because of the complexity of how objects, packages, triggers, workflows and integrations intersect and interact, understanding the effect of a change as fundamental as an account model shift is necessary before the shift is actually done. And that takes time, and attention to detail. Measure that thing twice before you cut!

How To Carry All Those Nuts

My work in data system consolidation is very rewarding; I greatly enjoy the challenges that our clients bring us, and love working with my team at OpenIssue to solve problems for those clients.

A few years back we established a pro bono practice and a series of low-cost "kick start" conversion services (using a fixed fee model), and have treated those projects as some of our most important work because they are the most constrained, and often the clients are most in need. The ones we love come by referral, and move fast. The ones we fear start with an RFP. We understand an RFP when it's policy-driven, but that is not often the case in the small nonprofit world.

We recently reviewed an 18-page detailed specification describing a very complex tool set and information system likely to cost at least $40k to implement and another $10k/year (at least!) to maintain. The 2 hours I spent with the prospective client went into reviewing the document and helping them understand its content and how it translates into real-world software options.

What they really needed instead was a $5k, two week “quick start”.

We understand (and see quite often) the incentive for a small nonprofit to write a big spec to allow for catching as much as possible (from funders, from software, etc). Low or no-cost systems like Salesforce™ allow groups to move quickly into new systems without the former complexities of server and installation. With the ease of communication with other users and ecosystems of advisors and consultants, why are these groups not simply taking the free stuff? Vendors (like us) will quote (and build) the world for a client – if that is what the client asks for! So is the RFP helping then?

It's tough to get folks to:

  • See the value in simplicity and speed of transition with SaaS tools
  • Realize data conversion is complex and should be done by experts

In search of counsel, I turned to a friend and client (we'll call her "S") who was familiar with the RFP process and the prospect's situation, and I asked, "How can we help these folks go after more realistic, more sensible goals?" The reply we got was:

Watching her struggle with figuring out systems reminds me how much time is wasted, and stress is caused, for people trying to do awesome work. And how regularly non-profit leaders have to make game-changing decisions about stuff they don't have the capacity to understand.

Keep doing what you are doing – providing authentic, nonpartisan (ha!), trustworthy leadership in this space. I’ll chew on how to help the little orgs better, but I think you are already well down the path.

This is officially a “favorite” email ever. Thank you again “S”, and to all our great clients and ecosystem partners and allies.

Here is the prescription I wish I could write for all nonprofits and higher ed groups:

  • Consider an advisor who has made this type of transition a few times. If we can’t help you – we can find you a few folks who can.
  • Lean on incumbent tools if they can be improved – as an alternative to switching.
  • Be honest about your operational needs from a software system (rolodex, lists, email, donations).
  • Consider the Agile/Iterative model – some of these people have good advice on that:
    • Guy Kawasaki – Iterations and fast/frequent work product to garner feedback are more valuable than the "perfect product" (which never appears...)
    • Eric Ries – The idea of the "Lean Startup" and "Validated Learning" further supports iterations, and small steps in the right direction, not plans that map the whole journey.
    • David Kelley, IDEO – Kelley believes that how quickly you create an initial prototype is directly proportional to how successful a product will be. Essentially, given a set project deadline, the earlier you invite feedback, the more chances you have to revise and improve. He calls this "enlightened trial and error." (source: http://ecorner.stanford.edu/videos/686/Design-as-an-Iterative-Process)

Buy or Build? NetSuite Customization

These days, figuring out how to implement a system that you need for your business can be a challenging process. Do you buy something "off the shelf" which has most of what you need, or do you build it on top of a SaaS platform such as Salesforce or NetSuite, which allows you to build a lot of custom functionality while leveraging a best-in-class CRM or ERP system?

We approach the decision by breaking things into two important groups, the “knowns” and the “unknowns.” The “knowns” are things to expect, including: a prioritized functionality list, an assessment of “canned” products already out there, an assessment of how much config a “canned” solution would need, the need to set time & money budgets (don’t forget consulting costs), and finally, allocation of internal resources for the effort.

The "unknowns" are examples of the unexpected, including the fact that custom software build projects are a "new thing" for most teams. Finding the right consultants can be time-consuming, and unplanned impacts can slow timelines and increase the load on internal team resources. Change management, adoption and training are always more difficult than expected, and it can be challenging to align business stakeholders during the project (again, whether built or bought).

Building a formal process document, or several (typically at a very high level), with all key users can allow the internal and external project teams to align around a single vision, which is an essential factor when designing software.

We recently designed and built a custom part number configurator for the East Coast manufacturing firm Equilibar, LLC, working with the operations team under David Reed. We started with getting an idea of what they wanted and quickly moved into the design/build discussion based on our review of their ERP system (NetSuite) strengths and weaknesses. We looked at one viable “buy” option, and assessed the gaps if we followed that path. The end result was to build instead of buy.

I recently spoke with David Reed about their decision-making process.

The native functionality of Netsuite would not allow us to efficiently build specific elements we needed for our pressure regulator manufacturing business. For years we had used work-arounds that included databases written externally in Microsoft Excel. This added complexity required us to use multiple pieces of software for any given task rather than using Netsuite as a true full function ERP package. To find a solution that would allow us to build part numbers, for example, within Netsuite, we looked at both market ready packages and custom written software. The ultimate deciding factor was the realization that off-the-shelf products were not going to eliminate the need for custom development. The great likelihood was that we would still need to locate and engage a custom software company to modify the off-the-shelf solution. That approach did not mitigate any of the risks usually associated with custom software development, and in fact may have created additional risks given the greater number of parties involved. If we were going to need to write custom software anyway, why not just write the software from scratch to meet our goals exactly? That is what led us to engage with OpenIssue to develop our tool within NetSuite.

In working with David and his team at Equilibar, our first step was to create a meta-sales process allowing the team to "see" where a customization could be needed. We established a rough baseline specification to start the process, then moved into designing both the functionality and the day-to-day business process, keeping in mind that the new software functionality would need to work for the team, and might change as we built it. We implemented an Agile design/build approach, working around a base spec and the needs of the team, and within 5 weeks had a working prototype.

We believe that time spent on process design and adjustment, followed by clear documentation, is a great way to align technology, vendor and internal stakeholders around any project, buy or build. The process document becomes a go-to resource to help with scope creep, prioritization and escalations.

SaaS services (rental software) afford businesses lower-cost systems with everywhere-and-always access (mobile, etc.) and scalability. While these benefits are clear, they come at a cost: one size fits all. Most SaaS systems offer customizability as the fix for this cookie-cutter approach. Customization is a cost.

This paradigm shift to SaaS has been so profound, and yet there are still gaps for specific business needs. How then does one make a decision on Buy or Build?  A few questions to ask yourself:

  • Is building this customization a part of a long term vision?
  • Have you looked closely at where your current systems are inefficient?
  • Do you have recommendations for consulting teams from trusted advisors?
  • Can customizations benefit the majority of current system users?
  • Could customizations replace manual process or duplicate effort?
  • Are you having a hard time finding “baked goods” available in the market?

These questions will help you not only make a decision of whether to build or buy, but will also guide you in the process of building a custom system, or configuring an out-of-the-box product.

We love helping clients identify options, and make smart system design/implementation decisions. Let us know if we can help you with a technical project. 

CRM Functionality Lists

In our consulting work, we are often pinned between a world of software functionality (and limits) and client functionality requirements (which are sometimes known ahead of the project start, and sometimes bumped into mid-stream).

Over the years we have built a sort of "master list" of functionality, and have decided to set up a maintained page on this site to keep it available for reference. We will often go down this list in a client discussion to see how we rank their requirements. Sometimes potential clients and clients come to us with this list, and very often when they do, it is specified at the level of the end user, not at a level that is helpful for building out a system.

We’d welcome your feedback on this list and its usefulness. We specifically use this list in our XRM Assessment reports and use our survey and data scan process to “guess” at the client’s prioritization sequence, typically igniting spirited discussions which ultimately lead clients to a more informed decision on systems, partners and approaches.

We especially like this post from Ian McAllister on how to approach prioritization. It favors starting higher up with themes (which could also be characterized as specific outcome goals, like donor retention), then coming down to more granular levels of functionality within those themes.

“All models are wrong, but some are useful.” *

*  Quote from George E. P. Box

A few weeks ago I got a cold call on a Saturday morning (on my cell phone?!) from a guy in Rome (yeah, Italy) in a heck of a hurry to convert an Italian Monastery from Blackbaud’s Raiser’s Edge ™ to Salesforce ™.

He explained that a few volunteers had helped them with the initial conversion, but that the data was not completely moved yet, and it did not look right. I am writing this post to try to offer this client (not an atypical one, may I add) a sort of "way out" of this corner we hear about quite often.

This conversation is typically led by these questions from the prospect:

  • Have you done this before?
  • How long does it take?
  • How much does it cost?

Our response is always the same (these days): that we have done hundreds of conversions to and from these systems, and that depending on the variables, the costs would fit into the following ranges:

  • Time range of: 3 weeks to 14 weeks – the average is 8 weeks.
  • Price range of: $8k to $100k – the average is $20k.

This was not well received. People in this situation want fast action and cannot understand (it seems) why this is going to be so difficult, or take so long. His primary objection was that if we had done so many of these, why couldn't we pinpoint the costs more accurately?

Cutting to the chase: try this model if you (or a loved one) are in this type of pickle:

Problem Story:

  • Small organization moving from one fundraising system to another
  • Short timeline and small budget
  • Person driving the transition is neither the primary user nor a conversion specialist

Best Path Forward:

  • Invest in your incumbent system before jumping: more/better training, advanced configuration, and possibly enhancements (send a note to our friend Bill Connors, and maybe a plane ticket too; Italy is nice for Westerners to visit, ey?)
  • If that is not fruitful:
    • Spend a little money to get a vendor selector who knows the options. This will save you 3-5 weeks of calendar time, 20 or so hours of asking questions that are already well known in the ecosystem, and allow your business to run without interruptions (call our friend Robert Weiner please).
    • Allocate 8-10 weeks for the conversion.
    • Involve the primary user in the conversion process. Have that person drive the effort if possible — this will save you time and money!
    • When you shop for your new system, use examples that are real to test the fit with the new system, like:
      • Walk through adding 10 records – manually. If this is too much for you to do – then you will get surprises later that you will regret (see blog post on “Top 10”).
      • Use that manually entered data to view reports – live – with your sales rep – to get a feel for what you are buying (test drive ey?!)
      • Measure/Assess time and energy costs of this transition – talk it over with your team.
      • Be honest with management; if a new system is a way to avoid proper training and discipline – then expect a replay of this in 18-36 months.
      • Assume you will lose some data, time and possibly people in the change. Hold the project to a realistic pace and measure of success – many projects get bogged down by unimportant concerns; getting it done and on time is the priority!

In our experience, the “fast/urgent conversion” prospects are typically not being championed by system users – they are being led by cost cutting folks. That is going to make the whole thing much harder. The transition needs to have system users involved, or when you are “all done”, user adoption will be difficult.

I later found out that an ecosystem ally had given this prospect my cell phone about 3 weeks back, so it took this guy 3 weeks to hunt around, see that things cost too much, then call me on a Saturday morning to get it done in a rush.

This model (approach to a conversion) works well – and often slows things down; I suppose I am not going to get much traction on this post… 🙂

Layers of Data

ID: 2014-11-16.1 by tgroden
Most people who use data on a daily basis think of data as it relates to their application or software (such as "Contacts or Leads", "Pipeline Data" or "Giving History"). But from the perspective of consultants consolidating or integrating systems, we have to break it down into patterns and flows, and work inside of (or sometimes establish for the first time) a database of record (DBoR) model to ensure data integrity and business continuity.

We have developed, and consistently leaned on, an admittedly simplified lens through which to view all (yes, all/any, seriously) data in all systems: our "6 layer" model. It helps us compartmentalize all of the data in any project to allow for pattern identification and integration architecture approaches. Often our clients ask us: "Can you glue this to that, then make it report on this?" That approach typically gets them 2 or more vendor products doing as well as they can, but it often does not get the client a best-of-breed integration design. Best of breed does not need to mean expensive, either; more often than not, we see out-of-the-box systems (and their recommended implementation approaches) as the most expensive path.

To help our clients consider (and create) their options, we break it all down into these 6 layers:



This 6 layer system is not meant to be a prescription for information consumption; it is meant to help data consolidation designers and teams like OpenIssue separate the "major flows" of data present in all information systems. We always accompany the 6 layer model with the ETA breakdown, as follows:

  • Entities
  • Transactions
  • Attributes

Again, this is not meant to tell anyone "about" their data or information; it is meant to enable consolidation and integration teams to handle the data within all the constraints that software and integration tools will force upon clients. By breaking things into layers and types, you can sequence your project better, and also manage your prioritization and budgets.
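The sequencing payoff of the ETA breakdown can be sketched concretely: entities must land before the transactions that reference them, and attribute enrichment can come last. The table names and mapping below are hypothetical examples, not a prescribed schema:

```python
# Hypothetical mapping of source tables to ETA types.
ETA_MAP = {
    "Contact":    "Entity",       # the people and organizations themselves
    "Account":    "Entity",
    "Donation":   "Transaction",  # events that reference entities
    "EmailOptIn": "Attribute",    # enrichment layered on afterward
}

def load_order(tables):
    """Order tables for conversion: Entities, then Transactions, then
    Attributes, preserving the given order within each type."""
    rank = {"Entity": 0, "Transaction": 1, "Attribute": 2}
    return sorted(tables, key=lambda t: rank[ETA_MAP[t]])
```

A `load_order(["Donation", "Contact", "EmailOptIn", "Account"])` call would push the two entity tables to the front, which is exactly the sequencing discipline the breakdown is meant to enforce.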

These approaches and tools help us manage timelines, budgets, and bring options to our clients.

Have I mentioned we love data and transition management? Thanks for reading..


Technology Myths

A lot of the work I do with my colleagues at OpenIssue involves informing people about their technology options and different approaches to adopting these options. Through this process, we discover that clients often have certain “beliefs” or ideas about technology. 

Over the years, we have found the following short list of ideas are prevalent in the client (and consulting) ecosystem. When clients can be informed that these are, in fact, technology myths rather than truths, we can help them avoid putting time and money into no-win pathways:

Myth 1: You can have a single system.

There is no such thing as a single system in information processing architectures, especially if it is cloud-based. Many pieces come together to make a user experience possible. 

Our work calls on us to architect systems and the connections between them. While not all systems can work together well, where they can, we favor improved communication over system consolidation.

Picture a forest and all the things involved that make it function – the trees, the low ground cover in between, the squirrels that spread acorns so that more trees can grow, etc.

It’s important to pay special attention to this myth. When this line of thinking is present in a project dynamic, decision makers can be misdirected towards a false simplicity which is, unfortunately, often supported by unfulfilled vendor promises. This paves the way for becoming a “Beta Tester” (or even Alpha!), which can interrupt your operations and the work that your technology is intended to support. We like to balance customization and new features with business continuity and opportunity cost analysis.

Myth 2: Project requirements are complete.

Project and system requirements are never “finished” per se, nor can they ever be fully “correct”. They are always drafts until the team really dives into the project. At their best, project requirements provide a solid starting point for what ultimately becomes an evolving business negotiation, with a maturing understanding of the project’s goals and outcomes.

We always make room for both reality and change in the execution of a statement of work. It’s something like crossing a narrow rope bridge over a high ravine: I’ll tend to watch each step very closely, and never lose sight of the other side nor the way back.

Myth 3: Budgets and timelines can be fixed.

Timelines and budgets are critical for any project. However, most project timelines and budgets do not stand the test of external change. The best a project team can do is to both have a plan, and also expect it to change. 

Just like when you plan to take a long walk on what starts out to be a gorgeous day, it’s important to bring a rain jacket or layers, because the weather can easily fluctuate.

Flexibility and change management can be difficult when consultants produce "not to exceed" (NTE) agreements in this dynamic environment. NTE agreements offer "fixed costs" but can wrest control out of your hands and into the hands of the vendor if they are not implemented inside an effective communication model.

Compromise should be the outcome. The key here is to remain actively involved in which compromises are made, when they are accepted, and how they are implemented.

At OpenIssue, we employ these and other mitigation strategies to manage this dynamic of changing timelines and budgets: weekly project check-ins, a well-defined escalation plan, and burn rate reporting. Both client and implementation teams need to be at the table and communicating regularly for this to go smoothly.

Human Resources Systems: Data Conversion Preparation

Some in one group, others in another..

Data conversion preparation is a key step in making sure you have a successful transition to your new system. Preparing for a conversion of benefits data between HR systems, such as converting from ADP to Workday, will introduce some entity layer keying challenges. Entity layer keying is the way in which individuals are identified as unique in a particular system. Often, especially when you have multiple data sources, individuals can be keyed differently. For instance, John H. Doe in one system could be J.H. Doe in another, and Doe, John, H. in a third, with three different unique IDs.

With HR data, it's all centered around an individual human. Create a way to identify everyone as a first order of business. This will save time for the data team, and for the design and test folks too.

So the first step is to create an entity layer key.

Typically there are 4 types of “humans” in the system:

  • Employees (current, former, part time, etc)
  • Dependents
  • Beneficiaries
  • Trusts and other entity-proxy records

Keying this will allow for unique identification of the entities for later data juggling, and it will also force (early in the process) a clear definition of any variance from this list of 4 profiles, allowing for more complete target environment configuration on plans and eligibility.
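The core idea of the entity layer key can be sketched as a normalization step: once the variant name forms are parsed into first/middle/last, they all reduce to the same match key. The key format and function below are our own illustration; a real conversion would add SSN or employee-ID fallbacks and fuzzy matching:

```python
import re

def entity_key(first, middle, last, dob):
    """Build a simple cross-system match key: normalized name plus DOB.

    Illustrative sketch only. Strips punctuation and case so that
    'John H. Doe', 'J. H. Doe', and 'Doe, John, H.' (once parsed into
    first/middle/last) all yield the same key.
    """
    def norm(s):
        # Keep only lowercase letters; drops periods, spaces, hyphens.
        return re.sub(r"[^a-z]", "", (s or "").lower())
    # last name + first initial + middle initial + date of birth
    return f"{norm(last)}|{norm(first)[:1]}|{norm(middle)[:1]}|{dob}"
```

With this, `entity_key("John", "H", "Doe", "1970-01-01")` and `entity_key("J.", "H.", "Doe", "1970-01-01")` collide as intended, letting you link the three differently keyed records from the example above.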

The second step is to clean some of the source data

Dust things off a bit in the source database. Take these basic steps and a lot of the newer SaaS-based HR systems will like you better:

  • Complete all incomplete core data: addresses, DOB, SSN.
  • Identify where zip code based coverage will be tripped up by decentralized pool addresses.
  • Identify election and effective date data hooks for all 4 types of humans.
  • Surface any "relationship" data hooks and any required variance therein.
  • Audit all free form text fields where a name is/can be entered and assess impacts.
  • Newer systems understand "Domestic Partner" better than older systems; look at the relationship data footprint in your system and surface it early, as you will hit conversion issues specific to that "relationship" type.

Adopt a 1% rule and bind to an escalation plan

Make sure you're not spending too much time on too few records. Spending 3+ days on 12 entity records should be done only after it's decided that the downstream cost of not manually handling those 12 records would be higher.

Hire or assign a notes minion

HR conversions are complex and involve many steps, often including many separate stakeholder groups. This poses a challenge for data teams during testing, because problems visible to one group are often invisible to another. If you take notes and actively manage root cause analysis, you can save everyone a lot of headaches as the following three key concerns spin in the universe concurrently:

  • Ongoing business (changes, adds, terminates, etc)
  • Plan and carrier conversions
  • Approaching open election dates

Abe Lincoln is famously credited with saying, "Give me six hours to chop down a tree and I will spend the first four sharpening the axe." Nowhere in our experience could that kind of planning benefit a conversion team more than with HR data.

How To Choose a Cloud System: Test Your Top 10

The goal of talking with a vendor about a technology solution is to gather information to help you make the right decision: How do these products meet, or differ from, your requirements? What will they cost to both implement and sustain? What will not be available and may need custom development? And so on.

So what is the best way to choose a cloud system? Why is it that people don’t test systems before buying them? Why would customers not take a few sample records from their existing environment and systems, walk through the (we hope) simple steps of entering them into the new system, and have the whole crew say “Cool – let’s buy it”?

Clients just don’t do this, and we are sort of amazed by it. We have a methodology we’d like to introduce, called the “Tiny Test Conversion,” aka TTC. We have never seen the following published – so we are doing it, and we welcome like-minded folks to reach out to us and spread the news…

How to buy a technology system:

  • Form a decision group around the vision of the project. This should include people from all your primary information system areas (accounting, development, operations, program, hr).
  • Decision group creates a short list of the technology vendors seen as possible partners and contacts the companies, asking for the following:
    • We are a <# of employees> user group in <insert industry>, located in <city, state>.
    • You may want to give them your EIN, too, as that is how they rank/route you as a lead.
    • We have an annual operating budget of <$>.
    • We are considering a move from <incumbent tech>.
    • We would like a demo of your products and want to have an interactive experience where, in our first meeting, we can enter some actual data into the system during the demo.
    • Send them some version of this Functionality List — not too specific.
    • Giving them all this info gets you to the right sales rep right away. It also gives them all the information they need to know you are not messing around and need results — not just conversation.
    • Be prepared for them to ignore or avoid the “enter data live in the demo” request – this is key.
  • Have the decision group compile a list of 10 constituents in your current systems who represent complex examples of data needs but whose information is also well known, so you walk into a demo knowing the source data/system – enabling you to test the target system.
  • On demo day, have a non-stakeholder record the following in a spreadsheet:
    • Vendor name – Sales rep name – Product name
    • Action taken – Result
    • Get into the demo, enter the data, and run the reports. See if the software actually works.
    • After the call, have a round-table discussion with the decision group, review the Action/Result notes, and vote on whether this system is perceived as capable of doing the trick.
  • Make no commitments and ask for no next meeting – explain that you need to have an internal discussion with your decision group and that “we will reach back out to you in a few weeks.”
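The demo-day spreadsheet and the round-table vote can be kept in a structure as simple as this. It is a sketch of the process described above, not a tool from the post – the vendor, rep, and product names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DemoLog:
    """One row per vendor demo: who it was, what we tried, how we voted."""
    vendor: str
    sales_rep: str
    product: str
    actions: list = field(default_factory=list)  # (action, result) pairs
    votes: list = field(default_factory=list)    # True = "seen as capable"

    def record(self, action, result):
        self.actions.append((action, result))

    def tally(self):
        """Majority vote from the decision group's round table."""
        return sum(self.votes) > len(self.votes) / 2

# Hypothetical vendor and demo session.
log = DemoLog("Acme CRM", "J. Rep", "AcmeCloud")
log.record("Entered constituent #3 from legacy system", "Saved cleanly")
log.record("Ran gift-summary report on entered data", "Timed out")
log.votes = [True, False, False, True, False]  # 5-person decision group
print(log.tally())  # False -- not perceived as capable; make no commitments
```

Keeping the Action/Result pairs separate from the vote means the group debates what actually happened in the demo, not what the slide deck promised.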

The key to this exercise is to get into the product right away, not into PowerPoint or talk. It’s software – so use it. This is seriously unpopular in the sales process, but it will save you a lot of time and money.

This early-in approach will also surface gaps both in the product and in your team’s understanding of what you need – and perhaps in how to talk about it in an eventual requirements document.

Going through this will help you form a team, and will begin the creation of a prioritized functionality list. Moreover, it will educate you, and allow you to move faster in the decision-making process. There are a lot of moving parts out there and cloud computing is making the playing field a very crowded place.

If this sounds difficult to do or to understand, call us and we will help you get through it at no charge. But be ready to bump into readiness issues on your part!

A little backstory on why we promote this approach:

We believe that the TTC will be well worth your investment of time and effort. If an ultra-tiny version of this conversion does not go well, how well do you think a big, expensive one will?

We believe clients are being sold systems, instead of buying solutions. I am not sure “sell” and “solution” can be in the same sentence. This, from the desk of the guy hired to put the systems into action for the clients…

Most stakeholder groups are neither tech-selection experts nor conversion specialists. We see that by stage 3 (initial test conversion – requirements refinement) of our 5-stage methodology framework, projects have surfaced gaps in their requirements and goals, both human and technical, and anxiety is running a little high. At least 50% of the time, somebody on the client project team says: “Hey, let’s just take the top 10 <fill in the blank> records from the old system(s), have just those converted to the new system, and see if we can do end-to-end testing on those.”

The suggestion from the client allows them to take a known set of data (in the legacy system) and see it through the lens of the new system.

Our response — Why don’t they test the software that way before they buy it?

We’d also suggest reading our post on Technology Myths. When you are comfortable with these as myths, as we are, you gain a new confidence and realism around making a change to your tech.

Visualization Exercise:

Imagine you are blindfolded and walked by the hand into a car dealership, where you are told “yes, the new car is beautiful, your significant other loves it, and it will fit all your passengers without a problem.” You sign the check; later, back at the ranch, you try to fit the 6 kids into the two-seater your significant other loves. Nobody lied, did they?

Our Acronym List


If you want to communicate effectively, it’s important to be aware of your habits. My engineering team and I often speak and write in acronyms (you know…TLAs), and it’s not good practice, at least not outside a conversation between propeller heads. While they convey specific meaning to us, acronyms used outside our team without definition can make our communication harder, not easier, to understand.

With that in mind, we set up an official team TLA list and wanted to share it here. Some of these acronyms are truly inspired; others are obscure, to say the least. Send us an email with any comments or additions to this list and we will fold them into our secret language.