A Salesforce developer’s pair programming experience during the pandemic

Pair programming has varying degrees of acceptance in the software industry. Whether developers should do more of it is usually answered with opinions. The recent experience I had with my colleague Simon produced some promising results. We had a few productive pair sessions over two weeks developing a payments feature in our Claims product. After the first session we could certainly feel the benefits, so we decided to do more. The COVID-19 pandemic sounds like it would hurt pair programming, since two developers can no longer sit at one machine. But there is nothing to stop remote pairing. For some people it could even turn out the other way around, by improving social connections.

Broadly speaking, we started with a “pair development” mindset, as developing a user story requires not only coding but many other tasks. We analysed the data model changes, discussed the system behaviour related to each new object/field, drew UML diagrams, mocked up UI prototypes, and drafted documentation in the first couple of sessions. What we were doing was clarifying ideas, discussing approaches, and coming up with a concrete design both of us would feel comfortable starting the coding with.

Before each session, we agreed on how long the session would take and which problems we should focus on solving. When there was time remaining, we simply moved on to the next issue. Here are the tools we used:

  • VS Code and its SFDX plugins – the IDE
  • Live Share – a VS Code plugin for sharing code (we stopped using it because it automatically launches a diff view)
  • Slack for screen sharing

We mostly used the “Driver and Navigator” style. The Driver is the person who types the code at the keyboard. The Navigator is the observer who gives directions, shares thoughts, and keeps an eye on the larger issues while the Driver is typing. A unique benefit of remote pairing is that the Navigator also has a machine and a keyboard, so they can go and find quick answers to the Driver’s questions. How many times do we, as individual programmers, have to pause typing to find the answer to a specific coding issue, or search the code base to find out what a class/method is doing? With such a remote pair, the Driver can keep driving smoothly, leaving the blocking questions to the Navigator and coming back to them asynchronously.

Pair programming requires intensive focus.

The advantage of pair programming is its gripping immediacy: it is impossible to ignore the reviewer when he or she is sitting right next to you.

Jeff Atwood

One direct benefit of focus is that the code turns out simpler and algorithms are implemented faster. Simon and I are both familiar with the payments module code base. We needed to implement an algorithm with some complex date range partitioning logic. We tried to follow the “Driver and Navigator” style for a while, but we were still stuck. As we discussed ideas, we found two different strategies for traversing a tree data structure (top-down or bottom-up) using recursive method invocation. We spent 15 minutes writing pseudo-code independently, each of us handling one strategy, and then regrouped to discuss both. In the end, surprisingly, we even simplified the implementation by removing the need for recursion, and the algorithm looked much simpler. It took us 2 hours. It could easily have taken me the whole day had I worked on it alone.

Another benefit of the intensive focus is removing errors effectively on the go. The errors range from compilation errors, to typical copy-and-paste errors, all the way through to business-level edge cases. In another session where I was the Driver, after two hours of non-stop coding, I pushed all the code changes to the Salesforce server from my VS Code IDE in one go, without a single compilation error. This saved time, as saving code to the server is a non-trivial, time-consuming step when coding on the Salesforce platform.

Apart from the well-known benefits of pair programming such as knowledge sharing, keeping focus, code review on the go, etc., I believe teams should encourage more pair development on the Salesforce platform, considering platform-specific challenges like:

  • More declarative approaches (Process Builder, formulas, Flow, etc.) that solve the problem directly or simplify some of the coding tasks.
  • Various limits and considerations – not just governor limits, but also restrictions and design choices (the choice between Master-Detail and Lookup, the choice between before and after triggers, the order of execution, etc.).
  • Bulkification, although almost second nature to every Salesforce developer, is sometimes still dismissed or applied incorrectly, even by experienced developers (see the sketch after this list).
  • Platform specific security issues.
  • Configuration / administration. Things like profile permissions, layout changes, field-level security, custom settings, etc., if missed in a configuration or deployment, can make it look as if nothing was developed at all.
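
To make the bulkification point concrete, here is a minimal sketch (the object and field choices are illustrative, not from our project) of the classic fix: collect Ids first and issue one query for the whole trigger batch instead of one query per record.

trigger ContactTrigger on Contact (before insert) {
    // Bulkified: gather all parent Ids first...
    Set<Id> accountIds = new Set<Id>();
    for (Contact c : Trigger.new) {
        if (c.AccountId != null) {
            accountIds.add(c.AccountId);
        }
    }
    // ...then issue a single query for the whole batch. A query inside
    // the loop above would hit the 100-SOQL-queries governor limit.
    Map<Id, Account> parents = new Map<Id, Account>(
            [select Id, Name from Account where Id in :accountIds]);
    for (Contact c : Trigger.new) {
        Account parent = parents.get(c.AccountId);
        // use parent to validate or default fields on c
    }
}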

In a retrospective session, Simon and I reviewed the approach and its costs and benefits. Our conclusion is that pair development is vital for collaborative teamwork, produces high-quality deliverables, and is effective when tackling a large user story.

Melbourne, Australia.

A simple Apex trigger framework

Whether one should use an Apex trigger framework at all is probably worth a separate discussion, since any abstraction comes at a cost, and the cost of a prematurely introduced framework can outweigh its benefits. When little business logic needs to be managed in triggers, the general practice is to keep it simple, stupid (KISS). Still, Apex trigger frameworks are discussed in many developer forums and Salesforce programming books, focusing on organising code to deal with more complex domain problems. Many of these frameworks/patterns put too much abstraction over the combinations of the trigger stages (before and after) and the operation types (insert, update, delete, undelete), which usually results in lots of boilerplate code to maintain. Some, introducing several interfaces to implement, are not that inviting to even get started with. This post presents an Apex trigger framework (aka a trigger handler pattern) that aims to separate trigger concerns, reduce programmer errors, and improve modularity while maintaining a simple style.

(The complete, compilable code can be found in this GitHub repo.)

There are these main concerns in Apex triggers:

  • Multiple triggers can be defined for the same object and their execution order is not guaranteed.
  • The before and after stages.
  • Trigger operations: isInsert, isUpdate, isDelete, isUndelete.
  • Individual trigger processes are often change-based, i.e. only executed on certain records that have some change.
  • Individual trigger processes may need to be switched on/off.
  • Trigger logic mostly deals with a domain problem, so the core logic may also need to be executed elsewhere – such as from an Apex REST API or a batch job.

When multiple triggers are defined for the same object, the code becomes hard to debug because developers need to be aware of all of the object’s triggers. Since the execution order of the triggers is not guaranteed, multiple before triggers (or after triggers) contend with each other, which makes things worse. It’s a widely accepted pattern to have one trigger per object. Further to this, keeping triggers thin makes it possible to use Apex classes to organise the trigger logic. The following code shows an AccountTrigger written in this style. It simply delegates its work to the common TriggerHandler class.

trigger AccountTrigger on Account (before insert, before update, 
before delete, after insert, after update, after delete, after undelete) {
    TriggerHandler.handle(TriggerConfig.ACCOUNT_CONFIG);
}

By looking at the class names, one can tell that only ACCOUNT_CONFIG is specific to the AccountTrigger; everything else is common to all triggers. One line per object trigger looks neat. Note that the typical trigger stages, operations, and their corresponding context variables like Trigger.isBefore, Trigger.isInsert, and Trigger.newMap are not a concern here at all.

It’s tempting to build an abstraction over the permutations of the stage factor (before and after) and the operation factor (insert, update, etc.). That results in lots of boilerplate code (how often do you need to handle a beforeUndelete event?). Quite often, the same logic needs to be invoked in both the isInsert and isUpdate operations, e.g. do something when a Status field is changed to “Approved”, whether it is a new record created with “Approved” status or an existing record whose status changed to “Approved”. The before and after stages, on the other hand, have distinctive purposes. Developers often need to think carefully about whether new trigger logic should go into the before or the after stage; normally the logic belongs in one of the two, very rarely in both. Therefore, separating the before and after concerns is more useful for removing design errors. The TriggerHandler class is common to every trigger. It focuses on these two stages and leaves the handling of the operation type to each specific trigger operation. The code is as follows:

/**
 * The common trigger handler that is called by every Apex trigger.
 * Simply delegates the work to config's before and after operations.
 */
public with sharing class TriggerHandler {
    public static void handle(TriggerConfig config) {
        if (!config.isEnabled) return;
        
        if (Trigger.isBefore) {
            for (TriggerOp operation : config.beforeOps) {
                run(operation);
            }
        }
        
        if (Trigger.isAfter) {
            for (TriggerOp operation : config.afterOps) {
                run(operation);
            }
        }
    }
    
    private static void run(TriggerOp operation) {
        if (operation.isEnabled()) {
            SObject[] sobs = operation.filter();
            if (sobs.size() > 0) {
                operation.execute(sobs);
            }
        }
    }
}

Let’s have a look at the TriggerOp interface (the name “TriggerOperation” is already taken by Salesforce). It represents an individual trigger operation that encapsulates a relatively independent piece of business logic.

public interface TriggerOp {
    Boolean isEnabled();
    SObject[] filter();
    void execute(SObject[] sobs);
}

It is important to guard the execution of the logic by checking a condition – often the operation type, such as Trigger.isInsert or Trigger.isUpdate. The isEnabled() method can also merge in other flags: an in-memory switch to turn the operation on/off, a custom setting, or a static resource. Another concern developers have is that not all records should have the logic applied; normally there should be a check that guards only the records that have actually changed. The filter() method forces developers to think about this aspect; if it is overlooked, it can result in complex recursive trigger calls. If all records need to be processed, the implementing class can simply return the whole Trigger.new list.
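
As an illustration, here is a minimal sketch of an isEnabled() implementation that merges the operation-type check with a custom setting switch. The Trigger_Switch__c hierarchy custom setting and its Account_Op_B_Enabled__c checkbox field are assumed names, not part of the framework:

// Sketch only: Trigger_Switch__c / Account_Op_B_Enabled__c are hypothetical.
public Boolean isEnabled() {
    Trigger_Switch__c switches = Trigger_Switch__c.getOrgDefaults();
    // Treat a missing setting as "enabled" so the operation runs by default.
    Boolean switchedOn = (switches == null || switches.Account_Op_B_Enabled__c != false);
    return switchedOn && Trigger.isUpdate;
}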

As for how the common TriggerHandler handles the various trigger operations, it is the TriggerConfig that addresses these common concerns:

  • The setting to enable/disable the trigger
  • The operations in relation to the before and after stages

The following TriggerConfig class shows the configurations for different object triggers. It statically instantiates one TriggerConfig object per object, each ready to be used in its own trigger.

/**
 * A singleton class that presents the configuration properties of the individual triggers.
 */
public inherited sharing class TriggerConfig {
    public Boolean isEnabled {get; set;}
    public TriggerOp[] beforeOps {get; private set;}
    public TriggerOp[] afterOps {get; private set;}
    
    public static final TriggerConfig ACCOUNT_CONFIG = new TriggerConfig(
            new TriggerOp[] {new AccountTriggerOps.OperationA()},
            new TriggerOp[] {new AccountTriggerOps.OperationB()});
    // Other object trigger config
    
    private TriggerConfig(TriggerOp[] beforeOps, TriggerOp[] afterOps) {
        this.isEnabled = true;
        this.beforeOps = beforeOps;
        this.afterOps = afterOps;
    }
}

The above code can be further tweaked to instantiate TriggerConfig records dynamically from a JSON static resource, so as to decouple it from the individual TriggerOp implementations. See this GitHub repo for more details.
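
A minimal sketch of that idea follows, assuming a static resource named TriggerConfigJson with content like {"Account": {"beforeOps": ["AccountTriggerOps.OperationA"], "afterOps": ["AccountTriggerOps.OperationB"]}} (the resource name and JSON shape are assumptions):

// Sketch: build the before operations for Account from a JSON static resource.
StaticResource sr = [select Body from StaticResource where Name = 'TriggerConfigJson' limit 1];
Map<String, Object> all = (Map<String, Object>) JSON.deserializeUntyped(sr.Body.toString());
Map<String, Object> accountConfig = (Map<String, Object>) all.get('Account');

TriggerOp[] beforeOps = new TriggerOp[] {};
for (Object className : (List<Object>) accountConfig.get('beforeOps')) {
    // Type.forName resolves the implementation at runtime, so TriggerConfig
    // no longer needs compile-time references to each TriggerOp class.
    beforeOps.add((TriggerOp) Type.forName((String) className).newInstance());
}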

The AccountTriggerOps class simply groups all the TriggerOp implementations relating to Account into one top-level class:

public with sharing class AccountTriggerOps {
    public class OperationA implements TriggerOp {
        public Boolean isEnabled() {
            return Trigger.isInsert || Trigger.isUpdate;
        }
        
        public SObject[] filter() {
            return Trigger.new;
        }
        
        public void execute(SObject[] accounts) {
            // validation logic
        }
    }

    public class OperationB implements TriggerOp {
        public Boolean isEnabled() {
            return Trigger.isUpdate;
        }
        
        public SObject[] filter() {
            Account[] result = new Account[] {};
            for (Account newAccount : (Account[]) Trigger.new) {
                Account oldAccount = (Account) Trigger.oldMap.get(newAccount.Id);
                if (oldAccount.Status__c != 'Active' && newAccount.Status__c == 'Active') {
                    result.add(newAccount);
                }
            }
            return result;
        }

        public void execute(SObject[] changedAccounts) {
            Set<Id> statusChangedIds = new Set<Id>();
            for (SObject acc : changedAccounts) {
                statusChangedIds.add(acc.Id);
            }
            new AccountChangeStatusBatchable(statusChangedIds).run();
        }
    }

    public class OperationC implements TriggerOp {
        // ...
    }

    public class OperationD implements TriggerOp {
        // ...
    }
}

The context variables (such as Trigger.old and Trigger.newMap) are only referenced directly inside each TriggerOp, because only the individual trigger operation knows under which condition (Trigger.isInsert, Trigger.isUpdate, etc.) its logic should be executed, and that in turn decides which trigger context variables are available to use.

If adopted in a managed package, this framework has the potential to be open for extension, i.e. the TriggerOp interface can be defined as global. Individual TriggerOp implementation classes can then be specified in a static resource for each TriggerConfig. In theory, custom code within an org that installs the managed package can hook its own trigger operations into the managed package’s trigger execution order by listing the individual TriggerOp classes to run in a static resource.
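
The change to the interface itself would only be the access modifier; a sketch:

// Sketch: declaring TriggerOp as global lets subscriber orgs implement it
// and have the managed package's TriggerHandler run their operations.
global interface TriggerOp {
    Boolean isEnabled();
    SObject[] filter();
    void execute(SObject[] sobs);
}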

In summary, this Apex trigger framework provides these benefits:

  • Allowing each trigger to be individually switched on/off.
  • Allowing each trigger operation to be individually switched on/off.
  • Promoting consideration of whether logic belongs in the before or the after stage.
  • Promoting consideration of which changed records need to be processed.
  • Increased modularity in managing the code.
  • Simple to use (well, subject to the definition of “simple”).

Namespace prefix issues with SObject fields map within managed packages

In many cases, we need to find all fields of an SObject (such as Contact), so we do this:

Map<String, SObjectField> CONTACT_FIELDS_MAP = Schema.SObjectType.Contact.fields.getMap();

This returns a map in which each key is a field name and each value is the corresponding SObjectField. The keys are all lower case. Confusion arises over the keys when this code is executed from within a managed package installed in a customer’s org. An often-asked question is whether the keys contain the namespace prefix (of the managed package) or not. My testing shows that from API v34.0 onward, the map contains both managed package custom fields (with the namespace prefix) and local custom fields (without a namespace prefix). Prior to this API version, the map only contains keys without a namespace prefix, so if a local field happens to have the same API name as a managed package field, it is overridden by the managed package field. I did the following test to confirm the difference between API versions.

In a managed package with a namespace prefix, create a simple Apex class with the API version set to v33.0:

public class NsTest {
    private static final Map<String, SObjectField> CONTACT_FIELDS_MAP = Schema.SObjectType.Contact.fields.getMap();
    
    public static void test() {
        Map<String, SObjectField> m = CONTACT_FIELDS_MAP;
        System.debug('>>> m: ' + m);
    }
}

Execute NsTest.test() in the Developer Console; the debug log shows the map contains keys without the namespace prefix. Change the class’s API version to v34.0 and re-run the script; the debug log now shows the map contains keys with the namespace prefix (for custom fields).
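
When the same code must work on both sides of this boundary, a defensive lookup helps. Below is a small sketch; the 'myns' namespace is a placeholder for your package’s prefix:

// Sketch: resolve a Contact field token whether or not the map keys
// carry the package's namespace prefix ('myns' is hypothetical).
public static SObjectField getContactField(String fieldName) {
    Map<String, SObjectField> fieldsMap = Schema.SObjectType.Contact.fields.getMap();
    SObjectField field = fieldsMap.get(fieldName.toLowerCase());
    if (field == null) {
        field = fieldsMap.get('myns__' + fieldName.toLowerCase());
    }
    return field;
}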

Salesforce Certified Platform Developer I

After more than 5 years of fiddling around with Apex classes and Visualforce components while focusing on general coding principles, I thought it might be good to learn some of the broader Salesforce features that are easily overlooked by developers. So I took this exam over the weekend: Salesforce Certified Platform Developer I. It was a happy result: PASS. It did not tell me what score I achieved, though. Here I just want to list some “new” features/points I discovered during the prep week before the exam. Some of these features have existed for years; I had just never paid attention to them.

  • Schema Builder. I cannot remember how many times I have opened different browser tabs for different SObject definition pages to find the relevant fields’ API names, types, picklist values, and lookup relationships to other objects. Schema Builder, at Setup | App Setup | Schema Builder, is a powerful tool for doing all of this in one place. Moreover, you can add, edit, and delete fields and objects with simple drag-and-drop; not to mention there is a “quick find” box to search for things. When customers want the schema of your product’s data model, just point them to this. The Trailhead module is here.
  • Contacts to Multiple Accounts. The Account lookup on Contact usually means the company the contact is most closely associated with, but contacts might work with more than one company: a business owner might own more than one company, or a consultant might work on behalf of multiple organizations. Any other accounts associated with the contact represent indirect relationships. The Related Contacts list lets you view current and past relationships. The Salesforce object behind this is AccountContactRelation.
  • Quick Deployment. A deployment mechanism that rolls out your customizations to production faster by running tests as part of the validation and skipping them in the actual deployment. It is also useful for preparing a deployment simulation against production before the real deployment happens. Both change sets and the Ant migration tool for metadata support quick deployments. The Trailhead module is here.
  • Change Sets. I have known the concept for quite a while but had not used it until a recent customer deployment. I have to say it is tedious work – clicking buttons hundreds of times to add the relevant components to the deployment. It is only agile when used together with quick deployment and when deploying a relatively small amount of work. It does track all deployment histories.
  • Enforce CRUD and FLS. I knew that when rendering Visualforce pages, the platform automatically enforces CRUD and FLS when the developer references SObjects and SObject fields directly in the page. However, I have always forgotten to enforce CRUD and field-level security when Visualforce pages reference simple string properties that indirectly relate to SObject fields. Expressions like these should be used more often in that case (see the sketch after this list):
    Schema.sObjectType.Contact.fields.Phone.isAccessible()
    Schema.sObjectType.Contact.fields.Name.isUpdateable()
    
  • Process Builder. A few of my favorite exam questions were about problem-solving options, i.e. whether we should use Salesforce declarative process automation features or Apex/trigger code. Apart from formula fields and workflow rules, Salesforce has a strong declarative process automation feature: Process Builder. You can easily build a wizard-like automation with this tool.
  • Triggers and order of execution. The developer guide is here. The exam had more than a couple of questions in this area. It is important to remember when before and after triggers are executed. And when a workflow rule is involved that can update the same record, and thus recursively fire the triggers, this is the guide for understanding the detailed steps of the process.
  • Test Suites. If multiple test classes are selected in “New Run” in the Developer Console, they run concurrently and can sometimes hit the “UNABLE_TO_LOCK_ROW” error. The better way is to create a suite of tests and run the suite; the test classes in a suite execute sequentially, one by one. A suite is also useful for regression testing.
  • Lightning Components. It is nice to follow the Trailhead to get some hands-on experience when learning Lightning. Even though it is still at an early stage and seems slow at preview start-up, it is the modern way of coding applications – single page, JavaScript MVC.
  • Standard Salesforce objects such as Opportunity and Lead. Surprisingly, Account to Opportunity is a master-detail relationship, yet the Account field on Opportunity is not mandatory. There were three questions in the exam about Salesforce standard objects and their relationships.
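
To illustrate the CRUD/FLS point above, here is a minimal sketch of a controller getter that checks field-level security before exposing a value through a plain String property (the class and field names are illustrative):

public with sharing class ContactViewController {
    public Contact record { get; set; }

    // Only expose the phone number if the running user can read the field.
    public String getPhoneDisplay() {
        if (record != null && Schema.sObjectType.Contact.fields.Phone.isAccessible()) {
            return record.Phone;
        }
        return '';
    }
}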

Object alias in SOQL

Object aliases in SOQL can reduce the number of characters in the query string and improve readability. Suppose there are these objects with parent-child relationships:

  • Parent: ObjectA
  • Child: ObjectB
  • Grand Child: ObjectC

All of these objects have three fields: Field1, Field2 and Field3. A normal SOQL statement that joins these objects from the lowest level of the object graph looks like:

List<ObjectC__c> cList = [
        select
                Field1__c,
                Field2__c,
                Field3__c,
                ObjectB__r.Field1__c,
                ObjectB__r.Field2__c,
                ObjectB__r.Field3__c,
                ObjectB__r.ObjectA__r.Field1__c,
                ObjectB__r.ObjectA__r.Field2__c,
                ObjectB__r.ObjectA__r.Field3__c
        from ObjectC__c
];

The version with object alias looks like this:

List<ObjectC__c> cList = [
        select
                objC.Field1__c,
                objC.Field2__c,
                objC.Field3__c,
                objB.Field1__c,
                objB.Field2__c,
                objB.Field3__c,
                objA.Field1__c,
                objA.Field2__c,
                objA.Field3__c
        from ObjectC__c objC, objC.ObjectB__r objB, objB.ObjectA__r objA
];

Notice that all the involved objects are specified in the “from” clause. This is a sort of DRYness that makes the SOQL less verbose. It is particularly useful when the number of characters is limited, such as SOQL queries used in HTTP GET URLs as part of REST web service calls.

Count large number of records (more than 50,000)

In a Salesforce org that has more than 50,000 records of an object, the following simple count() query will still hit the Salesforce governor limit: “System.LimitException: Too many query rows: 50001”

System.debug('total: ' + [select count() from Contact]);

Even though it seems the count function should not need to traverse the whole Contact table, it still does, for reasons such as checking the sharing settings on records, so that different users may get different counts. So, is there a way of retrieving a total number of records that exceeds 50,000? There are a few, each with its own pros and cons. Surprisingly, for all of these methods, such a simple task needs more coding than you would expect.

Method 1: Use a Visualforce page with the readOnly attribute set to true

Controller class:

public class StatsController {
    public Integer numberOfContacts {
        get {
            if (numberOfContacts == null) {
                numberOfContacts = [select count() from Contact];
            }
            return numberOfContacts;
        }
        private set;
    }
}

Visualforce page:

<apex:page controller="StatsController" readOnly="true">
    <p>Number of Contacts: {!numberOfContacts}</p>
</apex:page>

Then you will see the result when you access this page in the browser. Note that the readOnly attribute has to be set to “true”, otherwise you will still get a Visualforce error complaining “Too many query rows: 50001”. However, you won’t be able to create the controller and page directly in production orgs.

Method 2: Batchable class

public class ContactBatchable implements Database.Batchable<sObject>, Database.Stateful {
    Integer total = 0;

    public Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator('select Id from Contact');
    }

    public void execute(
            Database.BatchableContext BC,
            List<sObject> scope){
        total += scope.size();
    }

    public void finish(Database.BatchableContext BC){
        System.debug('total: ' + total);
    }
}

Then execute the following statement in the Developer Console:

Database.executeBatch(new ContactBatchable(), 2000);

This runs as an asynchronous Apex job, so it can take time. When it finishes, you will see a log with category “Batch Apex” at the top; the debug statement prints its info into that log. You will also see a few logs with category “SerialBatchApexRangeChunkHandler”, one for each batch.

In ContactBatchable, the start method has to use a query locator so that it can retrieve up to 50 million records (after 50 million? Ask Salesforce support). If you use an iterable in the batch class instead, the governor limit for the total number of records retrieved by SOQL queries (50,000 at the moment) is still enforced.

If the start method of the batch class returns a QueryLocator, the optional scope parameter of Database.executeBatch can have a maximum value of 2,000. If set to a higher value, Salesforce chunks the records returned by the QueryLocator into smaller batches of up to 2,000 records.

Still, this method cannot be applied to production orgs, as you won’t be able to create the Apex class there directly.

Method 3: Make an HTTP request using the REST API

By far, this seems to be the simplest way of achieving this, and it can be used in production orgs. In the Developer Console, run the following statements:

HttpRequest req = new HttpRequest();
req.setEndpoint('https://' + URL.getSalesforceBaseUrl().getHost() + '/services/data/v20.0/query/?q=SELECT+Id+from+Contact');
req.setMethod('GET');

String autho = 'Bearer ' + UserInfo.getSessionId();
req.setHeader('Authorization', autho);

Http http = new Http();
HttpResponse res = http.send(req);
String response = res.getBody();
String total = response.substring(response.indexOf('totalSize":') + 11, response.indexOf(','));
System.debug('Total: ' + total);

You will need to add a remote site setting in Setup with the URL set to your production org’s URL (say https://na11.salesforce.com).
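
The substring parsing above is fragile; a slightly more robust sketch deserializes the JSON body instead (reusing res from the snippet above):

// Sketch: extract totalSize by deserializing the JSON response body.
Map<String, Object> parsed = (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
Integer totalSize = (Integer) parsed.get('totalSize');
System.debug('Total: ' + totalSize);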

Custom sidebar: Pop up a modal dialog with content from static resource

A Salesforce org’s sidebar can be customized with HTML, so it is often used for placing small components like a table or some hyperlinks to external web pages. Static resources can host any type of file and are easily referenced by Visualforce pages and Apex classes. This brings up the idea of writing web widgets in pure HTML, CSS, and JavaScript and popping them up from the org’s sidebar links. Here is an example and the how-to:

We developed a utility tool in HTML5, CSS and JavaScript. This tool does not depend on any Salesforce technologies or resources, but we want to incorporate it into our Salesforce managed package so that any org installing our package can choose to use the tool. Since it is a utility tool, a good place to host it is a sidebar component. Let’s call this tool AbcTool.

The AbcTool has only three source files: abc-tool.html, abc-tool.js, and abc-tool.css. These files are packaged in a zip file which is then uploaded as a Salesforce static resource called WebWidgets. The following code shows how to display the AbcTool in a dialog. The code goes in the home page component’s custom HTML area. It uses jQuery UI to create a dialog and binds it to the onClick event of the “ABC Tool” link.

<link rel="stylesheet" type="text/css"
	href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.10.1/themes/base/minified/jquery-ui.min.css">
<script type="text/javascript"
	src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script
	src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.10.3/jquery-ui.min.js"></script>
<table>
	<tbody>
		<tr>
			<td><a href="javascript:void(0)" id="abcTool">ABC
			Tool</a></td>
		</tr>
	</tbody>
</table>
<script type="text/javascript">
$j = jQuery.noConflict();
$j(document).ready(function() {
	var iframe_url = '/apex/cve__AbcTool';
	$j('#abcTool').click(function() {
		var j$modalDialog = $j('<div></div>')
			.html('<iframe id="iframeContentId" src="' + iframe_url + '" frameborder="0" height="100%" width="100%" marginheight="0" marginwidth="0" scrolling="no"/>')
			.dialog({
				autoOpen: false,
				title: 'ABC Tool',
				resizable: false,
				width: 550,
				height: 450,
				autoResize: true,
				modal: true,
				draggable: false});
		j$modalDialog.dialog('open');
	});
});
</script>

The actual content of the dialog is in the Visualforce page AbcTool.page which, as you can see, is referenced as “/apex/cve__AbcTool”. This Visualforce page bridges the sidebar component and the static resource zip file; it just redirects the request to the HTML page in the static resource. The following is the AbcTool page:

<apex:page showHeader="false"
    sidebar="false"
    standardStylesheets="false"
    action="{!URLFOR($Resource.WebWidgets, '/abc-tool.html')}" />

Gotcha: convertTimezone() must be used in SOQL Date functions dealing with Datetime

SOQL date functions are pretty useful for grouping or filtering data by date fields. With a proper date function in the SOQL, the code can significantly limit the number of query result records, e.g. query all Tasks created today:

List<Task> tasks = [
        select Id, WhatId, Subject, CreatedDate
        from Task
        where DAY_ONLY(CreatedDate) = :Date.today()];

The above code looks neat enough, although the function DAY_ONLY is not that obviously named. The documentation states: "Returns a date representing the day portion of a dateTime field.", so it should be safe enough. I used it in a few places and it worked very well. However, I recently got a failing unit test while creating a managed package. The unit test was testing logic that uses the above code, and it only fails when it runs after 5pm GMT-7 and before 12am GMT-7. The local time zone of the org is GMT-7.

I started debugging the issue within that precious 7-hour window. I created a sample Task after 5pm GMT-7, ran the above code immediately, and it returned no records! I changed “Date.today()” to “Date.today() + 1” in the where clause, re-ran the code, and it returned the Task I had just created. What? Is this saying “get me all Tasks created tomorrow”? Obviously not; the only explanation is that the DAY_ONLY() function treats the Datetime parameter as GMT time. Any time between 5pm GMT-7 and 12am GMT-7 is already tomorrow in GMT.

All of a sudden I realized that all those date functions, like CALENDAR_YEAR, DAY_IN_MONTH, etc., could be useless, as they all have the same timezone issue. That couldn’t be right. I went back to the documentation and found this statement:

“SOQL queries in a client application return dateTime field values as Coordinated Universal Time (UTC) values. To convert dateTime field values to your default time zone, see Converting Time Zones in Date Functions.”

On the “Converting Time Zones in Date Functions” page, it says “You can use convertTimezone() in a date function to convert dateTime fields to the user’s time zone.” What do you mean, “you CAN”? In an org with any timezone other than GMT, you HAVE to use this function to make it work! This is clearly a compromise fix for the original timezone issue in those date functions.

Generally speaking, the timezone issue arises because Apex does not clearly differentiate Datetime and Date, and Salesforce handles these two types badly. See some other Datetime issues in this blog post: Danger: Date value can be assigned to a Datetime variable.

Anyway, the fix for the above code is:

List<Task> tasks = [
        select Id, WhatId, Subject, CreatedDate
        from Task
        where DAY_ONLY(convertTimezone(CreatedDate)) = :Date.today()];
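
The same convertTimezone() treatment applies to the other date functions too. For example, here is a sketch that groups Tasks by the user’s local calendar month:

AggregateResult[] results = [
        select CALENDAR_MONTH(convertTimezone(CreatedDate)) mth, count(Id) total
        from Task
        group by CALENDAR_MONTH(convertTimezone(CreatedDate))];
for (AggregateResult ar : results) {
    // Without convertTimezone(), the month boundaries would be in GMT.
    System.debug('Month ' + ar.get('mth') + ': ' + ar.get('total'));
}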