Monday, September 18, 2017

Environment Hub - One-stop shop

It is always a nightmare to keep juggling usernames / passwords / tokens across different dev orgs and sandboxes:

- Keeping an eye on different usernames / passwords for dev orgs and sandboxes
- Giving others access to your sandbox / dev org
- Whitelisting IPs to avoid being asked for verification tokens
- Sandbox refreshes - changing emails and re-verification
- Scripts to reset passwords, profiles, emails, etc.

I believe Environment Hub is quite a sleek solution.

Install Environment Hub Application


  • You will need to contact Salesforce customer support to have the app installed
  • For a production (non-ISV) org, it should be installed in the production org
  • For an ISV it is more flexible, but it is preferable to keep it in the same org as the LMA


Configuration


  • Select the Environment Hub app
  • Add the Environment Hub tab
  • For a production org, all sandboxes should be auto-discovered
  • For an ISV, we might want to register different Developer orgs with Environment Hub
  • Give all users who need Environment Hub the appropriate permissions on their profiles:
    • Manage Environment Hub
    • Connect Organization to Environment Hub


Sandboxes


  • Sandboxes are auto-discovered by Environment Hub
  • We should enable SSO on them
  • Once SSO is enabled, the sandbox must be refreshed
  • Once that is done, any production user (with the Connect Organization permission) will be able to log in to that org!
  • No more email resets, password resets, IP whitelisting, ...



Development Orgs


  • We can connect any dev org to Environment Hub
  • We should enable SSO on it
  • There are then three different methods to map an Environment Hub user to a dev org user:
    • User name mapping - manually map a user name from the Environment Hub org to a dev org user
    • Federation ID - with SSO, the mapping works as long as the Federation ID matches between the dev org and the Environment Hub org
    • User name formula field - apply a formula so that an Environment Hub user name is converted to one of the dev org users
In most cases there is only one dev org user that we care about, hence I use the third approach (formula field) to give all users in the Environment Hub access to the dev org.

E.g. if the dev org user is dev2@ot.com, I make the formula field evaluate to "dev2@ot.com"; every Environment Hub user then maps to dev2@ot.com and has full access to my dev org.
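The entire formula body in that setup is just a string literal (a sketch; substitute your own dev org username):

 "dev2@ot.com"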





Salesforce Big Objects

I was just experimenting with Salesforce Big Objects and found them quite interesting. They are mainly used for big data (100M+ records) analytics and mostly for asynchronous data crunching, Hadoop-style. However, there are some very critical distinctions to understand before going with Big Objects.


  • Currently Big Objects only support fields and permissions, and that's about it
  • We cannot have:
    • triggers
    • page layouts
    • extensive SOQL (indexed SOQL is supported, but it is extremely limited - which makes sense, as we are dealing with a humongous data set)
    • workflows, process builders, etc.
    • reports
  • Basically there is no UI at all; it is just a back-end data store for big data analytics - and that's about it.

Use case

In an org, we can run a survey on an object record (e.g. Account, Opportunity, etc.), store the survey data in a Big Object, and analyze it later.


How to use it:


1. Create Big Object

  • There is no user interface to create a Big Object and its fields. We must use the Metadata API (via the Ant Migration Tool or Workbench) to create these artifacts. Obviously, Workbench makes it a lot easier.
  • Create the object file (sketched after this list)
  • Create the permission set file
  • Create the package.xml file
  • Bundle them in a .zip file with the right directory structure (you can download it from here)
  • Use Workbench to deploy the .zip file, and the Big Object should look like below
  • Assign the permission set (Survey_BigObject) to the right users so they can query and update the data



  • Pay close attention to the indexed fields: the index is used when inserting records (to identify duplicates) and when issuing synchronous SOQL queries.
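As a reference, here is a minimal sketch of the object file (objects/Survey__b.object). The field and index names match the Apex later in this post, but treat the exact elements as assumptions and check the Metadata API reference for your API version:

 <?xml version="1.0" encoding="UTF-8"?>
 <CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
   <deploymentStatus>Deployed</deploymentStatus>
   <label>Survey</label>
   <pluralLabel>Surveys</pluralLabel>
   <fields>
     <fullName>WhatID__c</fullName>
     <label>What ID</label>
     <length>18</length>
     <required>true</required>
     <type>Text</type>
   </fields>
   <!-- WhatTime__c, WhatObject__c, Question1__c and Answer1__c are declared the same way -->
   <indexes>
     <fullName>SurveyIndex</fullName>
     <label>Survey Index</label>
     <fields>
       <name>WhatID__c</name>
       <sortDirection>DESC</sortDirection>
     </fields>
     <fields>
       <name>WhatTime__c</name>
       <sortDirection>DESC</sortDirection>
     </fields>
   </indexes>
 </CustomObject>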

2. Insert Data

  • We can insert data just like we do in Apex for any other object, or we can use the Bulk API
  • There is no upsert operation; Salesforce automatically checks the record being inserted against the index values, and if they match an existing record it updates, otherwise it inserts
  • Upon failure no exception is thrown; we just need to inspect the returned saveResult (or saveResults for a list)

 Account a = [ select id, name from account limit 1 ];
 Survey__b survey = new Survey__b();
 survey.WhatID__c = a.id;
 survey.WhatTime__c = System.today() + 1;
 survey.WhatObject__c = 'Account';
 survey.Question1__c = 'What is the rating';
 survey.Answer1__c = '1';
 // insertImmediate writes synchronously; the index values decide insert vs update
 Database.SaveResult saveResult = Database.insertImmediate(survey);
 System.debug( ' success ' + saveResult.isSuccess() + ' ' + saveResult );



3. Query Data

  • Querying data is quite tricky with Big Objects. You can query all records, which will most probably fail once there are millions of them,
  • or you can issue a synchronous SOQL query against the indexed fields only. The indexed fields must also appear in index order in the WHERE clause. See the example below:
 List<Survey__b> surveys = [ select id, WhatId__c, WhatObject__c, WhatTime__c, Question1__c, Answer1__c from Survey__b ];
 for(Survey__b survey : surveys ) {
   System.debug( survey );
 }

 System.debug(' -------------- indexed query -------------- ');
 /**
  * Only indexed fields, in their exact index order, can be used in the WHERE clause.
  * Trailing index fields can be omitted, but no gaps are allowed, e.g.:
  *   [select id from Survey__b]                                      is fine
  *   [select id from Survey__b where WhatID__c = :a.id]              is fine
  *   [select id from Survey__b where WhatTime__c = :System.today()]  is NOT fine,
  *   as you can't jump to index field 2 without index field 1 in the query.
  */
 Account a = [ select id, name from account limit 1 ];
 List<Survey__b> surveys2 = [ select id, WhatId__c, WhatObject__c, WhatTime__c, Question1__c, Answer1__c from Survey__b where WhatID__c = :a.id and WhatTime__c = :System.today() ];
 for(Survey__b survey : surveys2 ) {
   System.debug( survey );
 }

4. Asynchronous SOQL


  • Asynchronous SOQL is only supported via the REST API
  • We have to provide the asynchronous SOQL query, plus a custom object to store the result
  • It seems only one Async SOQL query can run at any given time - at least in the org I worked on

4.1 Create Custom Object To Store Async Result


  • Create a Survey Analysis custom object (Survey_Analysis__c) to store the analysis of the query, with counts

4.2 Run Asynchronous SOQL

  • Below is how an asynchronous SOQL request looks: we provide the SOQL, the target table, and the mapping between the selected fields and the target table's fields.

 {  
   "query": "select Question1__c, Answer1__c, count(whatId__c) c from Survey__B where WhatObject__c = 'Account' group by Question1__c, Answer1__c",  
   "operation": "insert",  
   "targetObject": "Survey_Analysis__c",  
   "targetFieldMap": {  
     "Question1__c": "Question1__c",  
     "Answer1__c": "Answer1__c",  
     "c":"Count__c"  
   }  
 }  



  • We can execute it using the REST API in Workbench, as sketched below
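In Workbench that means REST Explorer, with the JSON above as the POST body. The paths below assume API v41.0; Async SOQL was in pilot at the time, so verify the endpoint against your org:

 POST /services/data/v41.0/async-queries/          (submit the job)
 GET  /services/data/v41.0/async-queries/<jobId>   (check job status)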


Once the asynchronous SOQL job is completed, we can query the Survey_Analysis__c object for the accumulated results.
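A minimal way to inspect the output (the field names follow the targetFieldMap above):

 for(Survey_Analysis__c r : [ select Question1__c, Answer1__c, Count__c from Survey_Analysis__c ]) {
   System.debug( r.Question1__c + ' / ' + r.Answer1__c + ' : ' + r.Count__c );
 }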


Sunday, September 17, 2017

Salesforce Platform Cache

Nothing new, just a short summary on Platform Cache:

To oversimplify, Platform Cache is a glorified hash map. It is first divided into partitions. These are hard partitions: cache usage in one partition will not overflow into another. Usually a different partition is used for each project.


Partitions are further divided into an Org cache (available to all users) and a Session cache (scoped to one user session). These are again hard partitions: Org cache will not overflow into Session cache. The minimum cache size is 5 MB for an Org or Session cache.




To store a key in the Cache

 User u = [ select id, username from user where username = 'chintan_shah@abc.com' ];
 Cache.Org.put( 'local.MyPartition1.key1', u );
 System.debug(' key1 is stored ');

 String name = 'Chintan';
 Cache.OrgPartition myPartition1 = Cache.Org.getPartition('local.MyPartition1');
 myPartition1.put('key2', name );
 System.debug(' key2 is stored ');

  • We can either put a key directly using Cache.Org.put, or get access to a specific partition using Cache.Org.getPartition
  • To work with the Session cache, Org simply changes to Session in the code above.
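For example, the Session equivalent of storing key2 above:

 String name = 'Chintan';
 Cache.Session.put( 'local.MyPartition1.key2', name );
 Cache.SessionPartition myPartition1 = Cache.Session.getPartition('local.MyPartition1');
 System.debug( myPartition1.get('key2') );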


To retrieve a key from the Cache

 Object u = Cache.Org.get( 'local.MyPartition1.key1');
 System.debug(' key1 is retrieved ' + u );

 Cache.OrgPartition myPartition1 = Cache.Org.getPartition('local.MyPartition1');
 System.debug(' key2 is retrieved ' + myPartition1.get('key2') );




To clean the Cache


  • The cache is not a guaranteed persistent store; it can be cleaned at any time by Salesforce
  • It can also be wiped out during code deployment
  • Session cache max TTL is 8 hours and Org cache max TTL is 24 hours
  • Internally it uses LRU eviction when it hits the size limit, to clean up old data
Hence, we never really have to do the cleanup ourselves, but if we need to for some reason, we can use the code below. It has a limitation if entries were stored using a cache builder (see the removal call after this snippet).

 for(String key : Cache.Org.getKeys() ) {
   Cache.Org.remove(key);
 }

 for(String key : Cache.Session.getKeys() ) {
   Cache.Session.remove(key);
 }
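Entries stored via a cache builder (next section) are removed with the overload that takes the builder class; a sketch, assuming the UserInfoCache class defined below:

 String usernameKey = UserInfoCache.usernameToKey('chintan_shah@abc.com');
 Cache.Org.remove(UserInfoCache.class, 'local.MyPartition1.' + usernameKey);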



Cache Builder


Instead of explicitly storing and retrieving values, it is better to give Platform Cache a loading strategy, so that upon a cache miss Salesforce automatically calls our class to load the value for that key. This reduces code and handles cache misses much more gracefully.

We specify the cache-loading strategy as a class. Below is a small class which loads user information based on username. The idea is that the username is the key, and we load the user data if it is not already in the cache.


 /**
  * Created by chshah on 9/14/2017.
  */
 public class UserInfoCache implements Cache.CacheBuilder {

   // Called by the platform on a cache miss for the given key
   public Object doLoad(String usernameKey) {
     String username = keyToUserName(usernameKey);
     System.debug(' UserInfoCache load usernameKey ' + usernameKey + ' userName ' + username );
     User u = [SELECT Id, FirstName, LastName, IsActive, Username FROM User WHERE Username = :username];
     return u;
   }

   // Platform cache keys must be alphanumeric, so encode the special characters
   public static String usernameToKey(String username) {
     return username.replace('.','DOT').replace('@','ATRATE');
   }

   public static String keyToUserName(String key) {
     return key.replace('DOT','.').replace('ATRATE','@');
   }
 }


 String usernameKey = UserInfoCache.usernameToKey('chintan_shah@abc');
 User u = (User) Cache.Org.get(UserInfoCache.class, 'local.MyPartition1.' + usernameKey );
 System.debug( ' u ' + u );


  • The reason for converting the username to a usernameKey is that special characters are not allowed in Platform Cache keys (they must be alphanumeric).
  • By passing UserInfoCache.class to Cache.Org.get, we provide the cache-loading strategy; if the key is not found, Salesforce calls our class's doLoad to load it.


Considerations


  • An ISV (managed package) can supply its own cache, which uses the package's namespace rather than "local"
  • Cache puts follow the same transaction boundary as DML, so a rollback due to failure will not put data in the cache
  • Mind the cache TTL limits (8/24 hours), plus the limit on how much data we can put in the cache per transaction (usually 1 MB)



Saturday, September 16, 2017

Bad @Future

When writing Apex methods, it is very tempting to go for @future when we want some work done asynchronously, in a separate transaction. @future is definitely easy to use, but it comes with a lot of limitations.


Let's say I want to expose an API to all developers, and part of the work is quite resource-intensive, so I break my code down and put that part in @future, since it may take a long time and might need higher governor limits.

 /**
  * Created by chshah on 9/15/2017.
  */
 public with sharing class MyCoolApex {

   /**
    * I plan to provide this method to the rest of the developers to consume.
    */
   public static void myExposedMethod() {
     // do task 1
     // do task 2
     // do task 3
     // do future task
     System.debug(LoggingLevel.INFO, 'Inside myExposedMethod - Calling future task for additional work.');
     doFutureTask();
   }

   @future
   private static void doFutureTask() {
     System.debug(LoggingLevel.INFO, 'Doing Future Resource Intensive Task');
   }
 }

Here I am exposing MyCoolApex.myExposedMethod for other developers to consume, and calling doFutureTask to defer some resource-intensive processing.

It works fine most of the time (until it doesn't), e.g. if someone calls:

 MyCoolApex.myExposedMethod();  

However, the problem comes when someone tries to call this method from a batch, a scheduled job, or another future method (which would make a nested future): Salesforce will halt the processing with an error. E.g. client code:


 /**
  * Created by chshah on 9/16/2017.
  */
 public with sharing class TestInBatch implements Database.Batchable<sObject> {
   public Database.QueryLocator start(Database.BatchableContext BC) {
     return Database.getQueryLocator('select id from user limit 1');
   }
   public void execute(Database.BatchableContext BC, List<sObject> scope) {
     MyCoolApex.myExposedMethod();
   }
   public void finish(Database.BatchableContext BC) {
   }
 }


 TestInBatch tb = new TestInBatch();  
 Database.executeBatch(tb);  


This would result in a System.AsyncException ("Future method cannot be called from a future or batch method"). To work around it, we could put some guards inside our API, as below, but that doesn't really solve the problem.

   public static void myExposedMethod() {
     // do task 1
     // do task 2
     // do task 3
     // do future task
     if( System.isBatch() || System.isFuture() || System.isScheduled() ) {
       System.debug(LoggingLevel.INFO, 'Inside myExposedMethod - Unable to call future method.');
     } else {
       System.debug(LoggingLevel.INFO, 'Inside myExposedMethod - Calling future task for additional work.');
       doFutureTask();
     }
   }


Correct Solution (Queueable) 

The right way to solve the problem is to use Queueable. Queueable has the fewest restrictions: you can enqueue a Queueable from a future method, a batch, a schedulable, and even from another Queueable. In a Dev org we can chain Queueables up to 5 deep, and in an Enterprise org there is no limit on chained Queueables.


 /**
  * Created by chshah on 9/15/2017.
  */
 public with sharing class MyCoolApex {
   /**
    * I plan to provide this method to the rest of the developers to consume.
    */
   public static void myExposedMethod() {
     // do task 1
     // do task 2
     // do task 3
     // do future task
     System.debug(LoggingLevel.INFO, 'Calling Queueable');
     System.enqueueJob( new MyCoolApexQueueable() );
   }
 }

 /**
  * Created by chshah on 9/16/2017.
  */
 public with sharing class MyCoolApexQueueable implements Queueable, Database.AllowsCallouts {
   public void execute(QueueableContext context) {
     doFutureTask();
   }
   private static void doFutureTask() {
     System.debug(LoggingLevel.INFO, 'Doing Resource Intensive Task');
   }
 }


In the above example, the actual API method (myExposedMethod) just enqueues a job to defer the resource-intensive work asynchronously. Now we don't have to worry about who calls our method, or from which context.




Thursday, September 14, 2017

Ten Commandments for Salesforce Tests

1. Thou shalt not use "Test.isRunningTest()" in actual code
  • Use a mock instead, e.g. Test.setMock(HttpCalloutMock.class, new MyMockHttpService())
  • There might be an extreme case, but it is always best practice to keep the actual code clean, without Test.isRunningTest()
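A minimal sketch of such a mock (MyMockHttpService and its canned JSON body are illustrative, not a real service):

 @isTest
 public class MyMockHttpService implements HttpCalloutMock {
   public HttpResponse respond(HttpRequest req) {
     // Return a canned response instead of making a real callout
     HttpResponse res = new HttpResponse();
     res.setHeader('Content-Type', 'application/json');
     res.setBody('{"status":"ok"}');
     res.setStatusCode(200);
     return res;
   }
 }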


2. Thou shalt use Test.startTest and Test.stopTest
  • It resets governor limits!
  • In some cases it is necessary, e.g. if you need to do DML to set up data and the actual test code makes a callout
  • All asynchronous code gets executed right away, at the Test.stopTest line - and we never know which code will become asynchronous now or in the future


3. Thou shalt use System.runAs whenever possible
        It makes sure the code runs as expected under the intended profile.

 Profile p = [SELECT Id FROM Profile WHERE Name='Standard User'];
 User u = new User(Alias = 'standt', Email='standarduser@testorg.com', EmailEncodingKey='UTF-8', LastName='Testing', LanguageLocaleKey='en_US', LocaleSidKey='en_US', ProfileId = p.Id, TimeZoneSidKey='America/Los_Angeles', UserName='standarduser@testorg.com');
 System.runAs(u) {
   System.debug('Current User: ' + UserInfo.getUserName());
   System.debug('Current Profile: ' + UserInfo.getProfileId());
 }


4. Thou shalt use System.assert - after Test.stopTest, and with a meaningful message

  • Must use System.assert / System.assertEquals
  • It should always come after Test.stopTest, as asynchronous code gets executed at the Test.stopTest line
  • Have a meaningful error message: System.assertEquals(expected, actual, 'Message');


5. Thou shalt use @testSetup
  • Only one method per test class can have @testSetup
  • It is called in a separate transaction, so don't try to set variables there; use it only for data preparation


6. Thou shalt use private for the class
  • The class should be marked @isTest and private


7. Thou shalt use private for methods
  • Each test method should be marked @isTest and private


8. Thou shalt use a Test Data Factory
  • Instead of creating test data in the test class, it should be a separate routine - as chances are the same data will need to be created again.
  • We can use Test.loadData(Account.sObjectType, 'myResource') or create data based on parameters, but it is good to externalize it


9. Thou shalt never use @seeAllData
  • obviously!


10. Thou shalt test behavior over coverage



Based on the above, here is the template I use:

My Sample Class:

 /**  
  * Created by chshah on 9/14/2017.  
  */  
 public with sharing class My {  
   public static List<Contact> changeContactName(List<Contact> contacts) {  
     for(Contact c : contacts ) {  
       c.firstName = c.firstName.toUpperCase();  
     }  
     update contacts;  
     return contacts;  
   }  
 }  


Test Class:

 /**  
  * Created by chshah on 9/14/2017.  
  */  
 @isTest  
 private class MyTest {  
   @testSetup  
   private static void testSetup() {  
   }  
   @isTest  
   private static void testChangeContactName() {  
     Profile p = [SELECT Id FROM Profile WHERE Name='Standard User'];  
     User u = new User(Alias = 'standt', Email='standarduser@testorg.com', EmailEncodingKey='UTF-8', LastName='Testing', LanguageLocaleKey='en_US', LocaleSidKey='en_US', ProfileId = p.Id, TimeZoneSidKey='America/Los_Angeles', UserName='standarduser@testorg.com.testorg');  
     System.runAs(u) {  
       Contact c = MyTestFactory.createContact('Chintan','Shah');  
       Test.startTest();  
       List<Contact> contacts = My.changeContactName( new List<Contact> {c} );  
       Test.stopTest();  
       for(Contact con : contacts ) {  
         System.assertEquals(con.firstName, con.firstName.toUpperCase(), ' firstName must be in upper case ' + con.firstName );  
       }  
     }  
   }  
 }  


Data Factory:


 /**  
  * Created by chshah on 9/14/2017.  
  */  
 @isTest  
 public class MyTestFactory {  
   @TestVisible  
   private static Contact createContact(String firstName, String lastName) {  
     Contact c = new Contact(firstName = firstName, lastName = lastName );  
     insert c;  
     return c;  
   }  
 }