Authored by Gabriel Deupree | Enterprise Account Executive – Summit at Copado
Close your eyes and imagine a world where your business objectives line up with your technical requirements and resources. A world free of release days and the stress that comes with them. A world where you can deploy on demand without the fear of backlogs and features becoming shelfware. That may be a pipe dream to most, but it’s possible when you incorporate DevOps into your Salesforce development and delivery processes. Let’s take a look at how you can get there.
In the world of Salesforce DevOps, change sets are a disaster and sandbox refreshes are the option of last resort. The new school of thought centers on collaboration, to break down the silos between teams, and on automation, to eliminate the manual work that slows things down and costs your organization money. Getting down to brass tacks, to work smarter, not harder, you need to focus on the following:
Source-Based Development
The year is 2020, so get yourself Git! Whether you’re doing org-based development or Salesforce package-based development, you need to be tracking your work in version control. The practice of controlling Salesforce environments and packages through version control is called source-based development. Yes, org-based development is helpful in managing complex Salesforce orgs, and yes, package-based development helps organize the unpackaged metadata in your production org into well-defined packages, but neither can be managed reliably without version control.
You might have a blend of org and package development, but if your development isn’t source-based, you’re burdening your Salesforce investment with a lot of unnecessary stress. By adopting source-based development you get unparalleled visibility into the history of your metadata changes, and you can organize changes into user stories that each represent a single feature or customization. You may think it’s too difficult to adopt because a portion of your developers will have to learn Git, but thankfully there are tools out there like Copado that can automate Git actions in the background, so you really have no excuse not to move to source-based development, now do you?
Branching Strategy
Trunk-based development (short-lived branches only), feature branching, or something else? While every scenario is different, you’re going to want the simplest branching strategy that can accommodate your team’s needs. Trunk-based development is generally the fastest and best method, but when you’re doing org-based development on Salesforce, life isn’t quite that simple.
Salesforce orgs are long-lived, so each org on the path to production deserves its own branch that represents its current state. And to accommodate moving some features to production without others, you may need to make use of feature branches.
If you use feature branches, keep them as short-lived as possible so changes from different developers get integrated rather than staying isolated. Again, tools designed specifically for the Salesforce development lifecycle can manage much of this complexity for you and accommodate things like sandbox refreshes to keep work in sync (we’ll dig into this challenge in the next section). With that said, your branching strategy matters because it’s the foundation for CI/CD, and it can either optimize or hinder your productivity. Branches allow for parallel development and bring structure and standardization to your releases, but keep your branching simple so incremental changes can be made on demand.
Sandbox Refreshes
Raise your hand if you love sandbox refreshes! Don’t worry, we can wait… The fact of the matter is that sandbox refreshes are a nightmare, and a significant number of you reading this article aren’t refreshing your sandboxes frequently because you’re afraid of losing your work in progress. That’s a valid argument against refreshing, but when environments aren’t in sync, you’re going to experience merge conflicts. Untangling merge conflicts can be its own separate nightmare, along with all the time your developers waste duplicating effort on the same component.
The way to make sandbox refreshes a thing of the past is to follow best practices around user stories: package all the requirements for a feature into a user story and track it across the sandbox hierarchy. This model lets you quickly identify which sandboxes need a copy of that user story and then promote it backward (a process known as back promotion) to get those lower environments back in sync with production. With back promotion you can minimize the metadata differences among orgs; it’s not easy, but fortunately there are DevOps tools that can automate everything for you.
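To make the bookkeeping behind back promotion concrete, here is a minimal Python sketch. The org names and user story IDs are hypothetical, and this is not Copado’s API; it only illustrates the core question a DevOps tool answers for you: given which stories each org has, which sandboxes still need a story promoted back?

```python
# Minimal sketch of back-promotion bookkeeping.
# Org names and story IDs below are hypothetical.

def orgs_missing_story(deployed, story_id):
    """Return the orgs whose set of deployed stories lacks story_id."""
    return sorted(org for org, stories in deployed.items()
                  if story_id not in stories)

# Which user stories each org currently contains:
deployed = {
    "production": {"US-001", "US-002"},
    "uat":        {"US-001", "US-002"},
    "dev1":       {"US-001"},   # refreshed before US-002 landed
    "dev2":       set(),
}

# US-002 reached production but not every lower sandbox:
print(orgs_missing_story(deployed, "US-002"))  # → ['dev1', 'dev2']
```

In practice a tool tracks this automatically across the whole sandbox hierarchy, but the principle is the same: compare each environment’s story set against production and back-promote the difference.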
Data Deployments
If you’re using Salesforce CPQ, CloudSense, nCino, or any of the numerous Salesforce managed packages that depend on configuration data, you’ll understand the risk, complexity, and inefficiency of trying to deploy it between environments. Similarly, if you need to deploy specific test data from production to lower sandboxes for testing, it’s anything but a walk in the park. Your applications depend on data, so the old adage of “garbage in, garbage out” is highly relevant here. The data can’t just be accurate; it needs to be current and transparent, otherwise you’ll inevitably run into errors and things will break.
The problem with migrating reference data from one org to another is mapping the dependencies of a root object and connecting it to its related objects. It’s tedious, time-consuming, and prone to errors when done manually, but that’s only one part of the data deployment puzzle. The other part is data seeding for testing. A specific challenge there is that developers work in developer sandboxes (or at least they should, if they follow best practices), but the production data your applications need for testing isn’t included in sandbox refreshes. There are numerous ways to get your metadata out of production, but retrieving and deploying data while maintaining the integrity of its dependencies and relationships is very hard. You also need to ensure the data’s accuracy and mask its PII, which adds another layer of complexity.
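Conceptually, solving that mapping problem means loading parent objects before their children and rewriting lookup IDs as records land in the target org. Here is a minimal Python sketch of both steps; the object and field names are invented for illustration, and real tooling handles far more (polymorphic lookups, self-references, bulk limits):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical schema: each object lists the objects its lookup
# fields point at. Parents must be inserted before children.
lookups = {
    "Product":     set(),
    "PriceBook":   set(),
    "ProductRule": {"Product"},
    "PriceTier":   {"Product", "PriceBook"},
}

# A valid load order puts every parent before its children:
load_order = list(TopologicalSorter(lookups).static_order())

def remap(records, id_map, lookup_fields):
    """Rewrite source-org lookup IDs to their target-org equivalents."""
    return [
        {field: id_map.get(value, value) if field in lookup_fields else value
         for field, value in record.items()}
        for record in records
    ]

# After inserting Products into the target org, we learn their new IDs
# and remap the child records before loading them:
id_map = {"a01SRC": "a01TGT"}          # source ID → target ID
tiers = [{"Name": "Gold", "Product__c": "a01SRC"}]
print(remap(tiers, id_map, {"Product__c"}))
```

Doing this by hand for a deep object graph is exactly the tedium and error surface described above, which is why automating it pays off quickly.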
You might have built your own tool, or maybe you’re feeding CSV files into your applications using the Salesforce CLI or Data Loader, but without a fully automated tool that deploys relational data from org to org on demand, you’re at a serious innovation disadvantage. Granted, not every application relies on complex relational data for configuration, but every application needs test data. Think about how to provide long-term support for developers and testers that won’t compromise your compliance standards (GDPR, HIPAA, etc.). Know your limitations, because it’s not cost-effective to build everything yourself. Strategic partners that specialize in these challenges are going to save you time, money, and a lot of headaches. Find a partner you can work with for the foreseeable future, one that standardizes and templatizes your data deployments while solving both data seeding and the migration of reference data.
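As one illustration of the masking piece, here is a minimal Python sketch of deterministic PII masking for sandbox seeding. The field names and policy are hypothetical; the point is that the same input always produces the same token, so records that key on a masked field still line up across objects after seeding, while the original values stay unrecoverable.

```python
import hashlib

# Hypothetical field-level policy: which fields hold PII.
PII_FIELDS = {"Email", "Phone"}

def mask_value(value):
    """Replace a value with a stable, non-reversible token."""
    digest = hashlib.sha256(value.encode()).hexdigest()[:10]
    return f"masked-{digest}"

def mask_record(record):
    """Mask only the PII fields, leaving the rest of the record intact."""
    return {field: mask_value(value) if field in PII_FIELDS and value else value
            for field, value in record.items()}

contact = {"Name": "Acme Contact", "Email": "jane@acme.example", "Phone": "555-0100"}
masked = mask_record(contact)
print(masked["Name"])   # unchanged, not a PII field
print(masked["Email"])  # stable token; original address is gone
```

A production-grade approach also needs format-preserving masking for fields with validation rules and a documented policy per object, which is where specialized tooling earns its keep.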
Stay tuned for Part 2
Gabriel Deupree graduated from Washington State University in 2011 with a degree in Business Finance and an emphasis on Portfolio Management. He’s a charismatic individual who started his business career in software sales because it presented the opportunity to foster his love for technology and collaborate with people of diverse mindsets and backgrounds. Gabriel has demonstrated an uncanny work ethic that has propelled him into sales leadership roles, and now he’s working to bring DevOps enlightenment to the Salesforce community.