ANT version clash with Documentum and DFS

July 6, 2008

I was compiling the DFS samples using the ANT build file provided with the EMC SDK download.  I hit an ANT version error while running the “ant artifacts” target.  I downloaded a fresh binary of ANT 1.6.5 from the Apache website, but I kept getting the same error.

Invalid implementation version between Ant core and Ant optional tasks.
core    : 1.6.2
optional: 1.6.5

The reason seems to be that dctm.jar contains a MANIFEST.MF file which in turn references (older) ANT jar files:
Shared/ant.jar Shared/ant-ext.jar Shared/ant-launcher.jar


Check all your classpaths for any JAR files whose manifest refers to old versions of the ANT jar files. The fix:

– Remove dctm.jar from the CLASSPATH.
– Add the individual jar files from D:\Program Files\Documentum\Shared to the CLASSPATH.
– Exclude the ant.jar, ant-ext.jar and ant-launcher.jar files from the CLASSPATH.

set DCTM_SHARED=D:\Program Files\Documentum\Shared
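To track down which jars are dragging in old ANT versions, a small utility like the following can print each jar's manifest Class-Path entry and flag ANT references. This is my own sketch (the class name and logic are not part of any SDK):

```java
import java.io.File;
import java.io.IOException;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

// Prints the manifest Class-Path of each jar passed on the command line,
// and flags entries that reference ANT jars (ant.jar, ant-ext.jar,
// ant-launcher.jar).
public class ManifestClassPathCheck {

    /** Returns true if a manifest Class-Path value references an ANT jar. */
    static boolean referencesAnt(String classPath) {
        if (classPath == null) {
            return false;
        }
        for (String entry : classPath.split("\\s+")) {
            String name = entry.substring(entry.lastIndexOf('/') + 1);
            if (name.equals("ant.jar") || name.equals("ant-ext.jar")
                    || name.equals("ant-launcher.jar")) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        for (String path : args) {
            try (JarFile jar = new JarFile(new File(path))) {
                Manifest mf = jar.getManifest();
                String cp = (mf == null) ? null
                        : mf.getMainAttributes().getValue("Class-Path");
                System.out.println(path + " -> Class-Path: " + cp
                        + (referencesAnt(cp) ? "   [references ANT!]" : ""));
            }
        }
    }
}
```

Running it over dctm.jar should show the Shared/ant*.jar entries quoted above.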



Documentum D6 Aspects

December 12, 2007

Aspects are new in D6. You can find an excellent post on this blog regarding the features and usage of aspects. I will not repeat the information.

What I will do is talk about how we are using Aspects in our current implementation.

We are migrating about 50,000 documents from IBM Domino.Doc to Documentum. We decided to build a custom application to do this. We have to make sure that all the metadata from Domino.Doc is moved to Documentum. In addition, we want to keep a migration audit trail: who migrated each document, when, and what the document’s audit history was in Domino.Doc.

We have several custom doctypes, but the parent of all these doctypes is a doctype called sp_document which has all the standard metadata derived from Dublin Core and SGMS. Our initial approach in v5.3 was to create an attribute called sp_migration (2000 chars) and store the migration audit trail there. But this was a problem because it affected both migrated documents and future documents: future documents would carry an empty sp_migration attribute, which is not desirable.

Creating an aspect to hold the migration audit trail was a much better solution. We created the sp_document without the sp_migration attribute. We created an aspect “my_migration_aspect” with an attribute “migration_comments”. In our custom migration code, we attached the aspect to the newly migrated documents and set the migration history.

What do you need?

As of today, the documentation provided by Documentum for aspects is meager. Javadocs are missing for the com.documentum.fc.client.aspects package. The one tool you will definitely need to create aspects is the “AspectCreator” tool. I got this tool from EMC during our Documentum training session, “What is new in D6?”. Unless D6 SP1 ships with tools for aspects, you will have to get this tool from EMC.

I have tried ALTERing aspects from DQL, but I got parser exceptions. So for now you will have to depend on the “AspectCreator” tool.

Steps involved:

– You will have to create an interface and an implementation class for the aspect you are creating. You can find example code in the DFC 6 Development Guide. Compile these and package them into separate .jar files. This step involves some simple programming.

– Update the .xml file provided with the “AspectCreator” tool to specify the details of the aspect you are creating and the doctype you are allowing the aspect to be applied to. I could not find any documentation for this.
VERY IMPORTANT: From trial and error, I learnt that if you are creating a STRING attribute, you have to specify the type along with the string length, as STRING(2000). If you don’t specify the length you will get a StringIndexOutOfBoundsException.

– Run the command line tool create.cmd provided with “AspectCreator” to create the aspect in the docbase. Make sure you update this .cmd file with the correct paths and parameters.
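Once the aspect exists in the docbase, attaching it from migration code looks roughly like this. This is a sketch from memory, not compiled code: the IDfAspects cast, attachAspect call, and the aspect-qualified attribute name follow the DFC 6 aspect API as I understand it, so verify them against the DFC 6 Development Guide before use (session is an open IDfSession):

```java
// Sketch only -- verify method names against the DFC 6 Development Guide.
IDfSysObject doc = (IDfSysObject) session.newObject("sp_document");
doc.setObjectName("migrated document");

// Persistent objects implement IDfAspects in DFC 6
((IDfAspects) doc).attachAspect("my_migration_aspect", null);

// Aspect attributes are addressed as <aspect_name>.<attribute_name>
doc.setString("my_migration_aspect.migration_comments",
        "Migrated from Domino.Doc by jsmith on 2007-12-01");
doc.save();
```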

Some Gotchas:

I have faced some strange errors, such as a ClassCastException, when running aspect code from the IDE (Eclipse). The same code ran fine when run from the command line using ANT.

I will try to upload some sample code in my next post.

Good luck.

Some issues developing/migrating DFC programs to D6

September 14, 2007

This is a new post after a long time. I have started working on D6. Here is the first set of problems I faced and some solutions.

If you see this exception, it is most probably because some of your classes were compiled with JDK 1.4.x and others with JDK 1.5.x:

[java] java.lang.UnsupportedClassVersionError: com/documentum/fc/common/DfException (Unsupported major.minor version 49.0)

Change to JDK 1.5 for compilation, execution, etc.
If you are using an IDE like Eclipse, make sure you change the JDK there too. In fact, if you have access to all the source code, I recommend you delete all the .class files and do a complete rebuild; this will save you a lot of headaches.
I am not sure if this has any impact on existing ANT scripts. If anyone has tried this out and has a solution, please share.
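Major.minor version 49.0 is the class-file format produced by JDK 1.5 (48.0 is JDK 1.4). A quick way to find out which JDK a .class file was compiled for is to read its header; the version-number mapping is standard, the helper class itself is my own sketch:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Reads a class-file header (magic, minor, major) and reports the JDK
// release that produces that major version.
public class ClassVersion {

    /** Maps a class-file major version to the JDK release that emits it. */
    static String jdkFor(int major) {
        switch (major) {
            case 46: return "JDK 1.2";
            case 47: return "JDK 1.3";
            case 48: return "JDK 1.4";
            case 49: return "JDK 1.5";
            case 50: return "JDK 6";
            default: return "unknown (major " + major + ")";
        }
    }

    /** Returns the major version of a class file read from the stream. */
    static int majorVersion(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        if (data.readInt() != 0xCAFEBABE) {
            throw new IOException("not a class file");
        }
        data.readUnsignedShort();        // minor version
        return data.readUnsignedShort(); // major version
    }

    public static void main(String[] args) throws IOException {
        // Inspect a class on the classpath, e.g. this class itself.
        try (InputStream in =
                ClassVersion.class.getResourceAsStream("ClassVersion.class")) {
            int major = majorVersion(in);
            System.out.println("major " + major + " -> " + jdkFor(major));
        }
    }
}
```

If any jar on your classpath reports major 49 while you run under a 1.4 JVM, you have found the culprit.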

If you see the following exceptions, the cause is that new attributes have been added to dm_type in D6 which are not available in v5.3. This happens if you are using DFC 6 to connect to a 5.3 repository.

From my tests, I could confirm that:

– DFC 6 connecting to a 5.3 repository throws the exceptions/errors below.
– DFC 6 connecting to a D6 repository works fine (of course).
– DFC 5.3 connecting to a D6 repository works fine.
Exception #1:

DfException:: THREAD: main; MSG: [DM_OBJECT_E_LOAD_COUNT_ERROR]error: “error loading object — wrong object count for object of type dm_type; found 15 attributes, but type says there should be 21”


Exception #2: (During import operation)

java.lang.ArrayIndexOutOfBoundsException: 19
at com.documentum.fc.client.DfAttrTable.putAttr(
at com.documentum.fc.client.DfTypedObjHelperSessionBased.getAttr(
at com.documentum.fc.client.DfTypedObjHelperSessionBased.getAttr(
at com.documentum.fc.client.DfTypedObject.getAttr(
at com.documentum.fc.client.DfTypedObject.hasAttr(
at com.documentum.fc.client.DfFormat___PROXY.hasAttr(
at com.documentum.operations.DfXMLUtils.checkFormat(
at com.documentum.operations.DfXMLUtils.isXML(
at com.documentum.operations.DfApplyXMLForImport.applyXMLApplication(
at com.documentum.operations.DfApplyXMLForImport.execute(
at com.documentum.operations.DfOperationStep.execute(
at com.documentum.operations.DfOperation.execute(

Good luck.

Documentum Search Audit Trails

February 17, 2007

Q) How do you collect audit trails of searches performed by users?
Documentum does not capture any audit events for searches. However, search statistics and reports can be used to identify frequently used keywords and to tune the search engine to provide more accurate results.

The statistics can also be used for creating management reports if needed.

Design Approach:
1. Create a new persistent object (“sp_search_log”) to store Search log information
2. Customize the search component’s behaviour class’s onRenderEnd() method to create a new “sp_search_log” object
3. Save the object before displaying the JSP

Alternate Approaches:
1. Use JDBC to capture the information in a database table.  Complicated approach involving opening database connections.
2. Create custom audit trails to create dm_audittrail objects.  I have not yet studied the implications of this. 

CREATING A NEW TYPE to store Search Logs:
CREATE TYPE "sp_search_log"
( "r_search_id" ID,
"userid" CHAR(10),
"userdisplayname" CHAR(200),
"deptcode" CHAR(6),
"keyword" CHAR(100) REPEATING,
"location" CHAR(250) REPEATING,
"attrib_namevalue" CHAR(250) REPEATING,
"starttimeofsearch" DATE,
"endtimeofsearch" DATE,
"noofresults" INT,
"noofvieweddocs" INT
)

OUTPUT OF DQL > new_object_ID  030004d2800001b9 

ALTER TYPE "sp_search_log" DROP_FTINDEX ON "userid"
ALTER TYPE "sp_search_log" DROP_FTINDEX ON "userdisplayname"
ALTER TYPE "sp_search_log" DROP_FTINDEX ON "deptcode"
ALTER TYPE "sp_search_log" DROP_FTINDEX ON "location"
ALTER TYPE "sp_search_log" DROP_FTINDEX ON "attrib_namevalue"

Use this DQL to drop any fields if needed:
ALTER TYPE "sp_search_log" DROP "Field-Name" PUBLISH

Use this DQL to add new fields if needed later:
ALTER TYPE "sp_search_log" ADD "New-Field-Name" DATE PUBLISH
– "attrib_namevalue" CHAR(250) REPEATING will be used to store the params from advanced search in the form date=22/01/2006, etc.
– If the user uses a phrase search like "new york", it is stored as one keyword. If new york is used without quotes, it is stored as two keywords.
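The sample code further below simply tokenizes the query description on spaces, which splits new york into two tokens even when quoted. If you want quoted phrases kept together before appending to the keyword attribute, a small helper like this could do it (my own sketch; KeywordSplitter is not part of WDK or DFC):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Splits a query into keywords, keeping double-quoted phrases as a
// single keyword ("new york" -> one token; new york -> two tokens).
public class KeywordSplitter {

    // Either a double-quoted run of characters, or a run of non-whitespace.
    private static final Pattern TOKEN = Pattern.compile("\"([^\"]*)\"|(\\S+)");

    static List<String> split(String query) {
        List<String> keywords = new ArrayList<>();
        Matcher m = TOKEN.matcher(query);
        while (m.find()) {
            keywords.add(m.group(1) != null ? m.group(1) : m.group(2));
        }
        return keywords;
    }
}
```

Each returned token would then be appended to the repeating "keyword" attribute.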

Note: This code is meant to prove the concept; it may not be the best approach for performance. A better approach could be to store the "starttimeofsearch" in an instance variable and then create and save the sp_search_log object only once, after the search operation is completed.

// DFC and WDK imports omitted for brevity. The superclass is assumed to be
// the WDK search component behavior class being customized (Search here).
public class SearchEx extends Search
        implements IControlListener, IDfQueryListener, Observer,
        IReturnListener, IDragDropDataProvider, IDragSource, IDropTarget {

    private boolean m_loggedToDB = false;
    private boolean m_loggedNoOfResultsToDB = false;
    private boolean m_isFirstCall = true;
    private String m_NewSearchLogObjectId = null;

    public void onInit(ArgumentList args) {
        super.onInit(args);
        System.out.println("## Inside custom search");
        String strQuery = args.get("query");
        System.out.println("## strQuery: " + strQuery);
    }

    public void onRenderEnd() {
        super.onRenderEnd();
        // First render: create the log object. A later render updates it
        // once, with the result count.
        if (!m_loggedToDB && m_isFirstCall) {
            m_isFirstCall = false;
            createSearchLogObject();
        }
        if (m_loggedToDB && !m_isFirstCall && !m_loggedNoOfResultsToDB) {
            updateSearchLogObject();
        }
    }

    private void createSearchLogObject() {
        IDfSession sess = this.getDfSession();
        String userid = "Not found";
        try {
            userid = sess.getLoginUserName();
            System.out.println("### userid: " + userid);

            String queryDesc = getQueryDescription();
            System.out.println("### queryDesc: " + queryDesc);

            IDfPersistentObject searchLog = sess.newObject("sp_search_log");
            searchLog.setString("userid", userid);
            searchLog.setString("userdisplayname", userid);
            //searchLog.setString("deptcode", "DEPT_CODE GOES HERE");

            IDfTime timeNow = new DfTime();
            searchLog.setTime("starttimeofsearch", timeNow);
            setNewValuesForAttribute("keyword", queryDesc, " ", searchLog);

            String searchLocations = getSearchSources();
            setNewValuesForAttribute("location", searchLocations, ",", searchLog);

            searchLog.save();
            m_NewSearchLogObjectId = searchLog.getObjectId().getId();
            System.out.println("************ Saved Search Log ************ " + m_NewSearchLogObjectId);
            m_loggedToDB = true;
        } catch (DfException e) {
            e.printStackTrace();
        }
    }

    private void updateSearchLogObject() {
        System.out.println("### Updating the record");

        Datagrid datagrid = (Datagrid) getControl("doclistgrid", Datagrid.class);

        // Get the total number of results from the underlying DataHandler.
        // A value of -1 indicates that the DataHandler does not support
        // result counting.
        int noOfResults = datagrid.getDataProvider().getResultsCount();
        System.out.println("Datagrid noOfResults: " + noOfResults);
        if (noOfResults != -1) {
            IDfSession sess = this.getDfSession();
            try {
                IDfPersistentObject searchLog =
                        (IDfPersistentObject) sess.getObject(new DfId(m_NewSearchLogObjectId));
                searchLog.setInt("noofresults", noOfResults);

                IDfTime timeNow = new DfTime();
                searchLog.setTime("endtimeofsearch", timeNow);
                searchLog.save();
                m_loggedNoOfResultsToDB = true;
                System.out.println("************ Updated Search Log ************");
            } catch (DfException e) {
                e.printStackTrace();
            }
        }
    }

    private void setNewValuesForAttribute(String attributeName,
            String queryString, String delimiter, IDfPersistentObject obj) throws DfException {

        StringTokenizer st = new StringTokenizer(queryString, delimiter);
        while (st.hasMoreTokens()) {
            obj.appendString(attributeName, st.nextToken());
        }
    }
}