Sunday, January 23, 2011


DQL to find queued items

select g.r_object_id, g.object_name, q.date_sent, g.r_object_type, g.r_content_size, q.sent_by from gar_global g, dm_queue q where q.name='dm_mediaserver' and g.r_object_id=q.item_id order by q.date_sent

================================================
API to dequeue items

dequeue,c,<stamp>
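
The <stamp> is the ID of the dmi_queue_item row to be dequeued. A minimal IAPI sketch, assuming the dm_mediaserver queue from the query above (look up the ID first, then dequeue it):

?,c,select r_object_id, item_id, date_sent from dmi_queue_item where name='dm_mediaserver'
dequeue,c,<r_object_id>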

================================================
The status of a workflow can also be monitored using DQL. The following are a few queries that can be used for monitoring purposes:

select  wf.object_name as workflow_name, wf.r_creator_name as creator, qi.task_name as task_name, qi.task_state as task_state, wi.r_performer_name as performer, wi.r_creation_date as creation_date, wi.r_due_date as due_date, wi.r_priority as priority, wi.r_sign_off_req as sign_off_req from dm_workflow wf, dmi_queue_item qi, dmi_workitem wi where qi.router_id = wi.r_workflow_id and wi.r_workflow_id=wf.r_object_id

This query returns the workflow and its activities along with the status, performer, dates, priority, and sign-off requirement.

workflow_name: the name of the workflow that is currently active

creator: the user who started the workflow (usually the user configured to run the workflow job)

task_name: the current task/activity that the workflow is running

performer: the user who is assigned to perform the task

task_state: the status of the task (dormant/acquired/finished)

creation_date: the start date of the activity

due_date: present if the activity has a timer requiring it to be finished within a certain time

priority: the priority of the activity

sign_off_req: 1 if the activity must be signed off before the task can be finished
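
To watch only tasks that are still open, the same query can be narrowed on task_state; a sketch, assuming only dormant and acquired tasks are of interest:

select wf.object_name as workflow_name, wi.r_performer_name as performer, qi.task_name as task_name, qi.task_state as task_state from dm_workflow wf, dmi_queue_item qi, dmi_workitem wi where qi.router_id = wi.r_workflow_id and wi.r_workflow_id = wf.r_object_id and qi.task_state in ('dormant','acquired')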
-------------------------------------------------------------------------------------
select wf.object_name as workflow_name, pkgs.r_package_name as package_name, pkgr.r_component_chron_id as component_id, pkgr.r_component_name as component_name from dm_workflow wf, dmi_package_rp pkgr, dmi_package_sp pkgs where wf.r_object_id = pkgs.r_workflow_id and pkgr.r_object_id=pkgs.r_object_id

This query returns the packages attached to the workflow and the objects they contain.
 
workflow_name: the name of the workflow that is currently active

package_name: the package used within the workflow

component_id: the object ID of an object in the package

component_name: the object name of an object in the package


=============================================
Count Objects by Month or Year 

select DATETOSTRING(r_creation_date, 'yy/mm') as period, count(*)
from dm_document group by DATETOSTRING(r_creation_date, 'yy/mm')
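
To count by year instead, the same pattern works with a year-only format string; a sketch:

select DATETOSTRING(r_creation_date, 'yyyy') as period, count(*)
from dm_document group by DATETOSTRING(r_creation_date, 'yyyy')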
============================================

Get Value Assistance for an Attribute

select valid_values as type_value from dm_value_list where r_object_id in
(select default_id from dm_cond_id_expr where r_object_id in
(select cond_value_assist from dmi_dd_attr_info where type_name='employee' and attr_name='first_name'))
order by valid_values

=======================================================

This is the query that the agent exec process uses to work out which jobs to run next.

SELECT ALL r_object_id, object_name, a_next_invocation FROM dm_job WHERE ( (run_now = 1) OR ((is_inactive = 0) AND ( ( a_next_invocation <= DATE('now') AND a_next_invocation IS NOT NULLDATE ) OR ( a_next_continuation <= DATE('now') AND a_next_continuation IS NOT NULLDATE ) ) AND ((expiration_date > DATE('now')) OR (expiration_date IS NULLDATE)) AND ((max_iterations = 0) OR (a_iterations < max_iterations))) ) AND (i_is_reference = 0 OR i_is_reference is NULL) AND (i_is_replica = 0 OR i_is_replica is NULL) ORDER BY a_next_invocation, r_object_id



============================================

 Index Queue Count

select count(*) from dmi_queue_item a, dm_sysobject (all) b where b.r_object_id = a.item_id and b.object_name = a.item_name and (a.task_state = 'acquired' or a.task_state is NULL or a.task_state = '') and a.name = 'dm_fulltext_index_user'
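
A breakdown of the index queue by state can be obtained by grouping on task_state; a minimal sketch:

select task_state, count(*) from dmi_queue_item where name = 'dm_fulltext_index_user' group by task_state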

============================================

To add a format

create dm_format object set name='jpeg_ws', set description='Custom alticor Thumbnails', set mac_creator='????', set mac_type='JPEG', set dos_extension='jpg', set mime_type='image/jpeg', set default_storage='280175d680000114'
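
The new format can then be verified with a simple select; a sketch (the name jpeg_ws matches the create statement above):

select name, dos_extension, mime_type, default_storage from dm_format where name = 'jpeg_ws'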


=============================================
To See active sessions

execute show_sessions

execute list_sessions

execute count_sessions

==========================
Display objects with Cabinet

select d.object_name as "Document", c.object_name as "Repository",f.object_name from dm_cabinet c,dm_folder f, dm_document d where f.i_cabinet_id=c.r_object_id and any d.i_folder_id=f.r_object_id
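
If only the contents of one known cabinet are needed, the FOLDER predicate is a simpler alternative; a sketch (the cabinet name /MyCabinet is just an illustration):

select object_name from dm_document where folder('/MyCabinet', descend)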

================================
To find file system path of an object’s content file

select file_system_path from dm_location where object_name in (select root from dm_filestore where name in(select a_storage_type from dm_document where r_object_id = 'object_id'))
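
Note that this returns the filestore root rather than the path of the content file itself. If the full path of a single object's content is all that is needed, the IAPI getpath method can be used instead; a sketch, with <object_id> as a placeholder:

getpath,c,<object_id>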


========================================
To find the default file store of a type

select name from dm_store where r_object_id in(select default_storage from dmi_type_info where r_type_id in(select r_object_id from dm_type where name = 'my_document'))

========================================
To find users and  groups they belong to

select u.r_object_id,u.user_name,u.user_state,u.r_modify_date,g.group_name from dm_user u, dm_group g where u.user_name not like 'ldap%' and u.user_name not like 'dm_%' and any g.users_names = u.user_name and u.user_source='LDAP' order by u.user_name,g.group_name

============================================
DQL to get folder path


SELECT s.object_name, f.r_folder_path FROM dm_sysobject s, dm_folder f WHERE s.i_folder_id = f.r_object_id AND f.r_folder_path != '' AND <additional conditions> ENABLE (ROW_BASED)
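
For example, restricting the result to a single document by name (the name myDocument is just an illustration):

SELECT s.object_name, f.r_folder_path FROM dm_sysobject s, dm_folder f WHERE s.i_folder_id = f.r_object_id AND f.r_folder_path != '' AND s.object_name = 'myDocument' ENABLE (ROW_BASED)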


=======================================================

Wednesday, January 19, 2011

Dump and Load Syntax

The following set of commands will dump all of your relevant information out of your docbase. It will extract all of your content objects (and their content), as well as your formats and users:

create,c,dm_dump_record
set,c,l,file_name
c:\temp\adump.new
set,c,l,include_content
T


append,c,l,type
dm_sysobject
append,c,l,predicate
1=1

append,c,l,type
dm_format
append,c,l,predicate
1=1

append,c,l,type
dm_user
append,c,l,predicate
1=1

save,c,l
getmessage,c 
 
This script dumps all dm_sysobject, dm_format, and dm_user objects from your docbase, including their content. You will notice that for each object type we append a predicate of "1=1". Since the predicate is required, this is a way of tricking Documentum into exporting all objects.
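
The predicate is simply a DQL WHERE clause fragment, so a dump can also be restricted to a subset of objects; a sketch that would limit dm_sysobject objects to those created since 2010 (the date is just an illustration):

append,c,l,type
dm_sysobject
append,c,l,predicate
r_creation_date >= DATE('01/01/2010')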

How to Perform a Load

The load process is very similar to the dump process. First, you will probably want to create a brand new docbase. It is possible to load into an existing docbase, but depending on what is in your dump file, you may end up with conflicts in your docbase. You will then want to move your dump file onto the same server as your new docbase (in our case, adump.new). This will reduce network traffic and speed up your load process.
The first step in actually performing the load is to run "preload". This is another utility program that should be located on your Documentum server. It can be executed from the following command line:
preload -dump_file -script_file -U -P 
 
This performs several checks on your docbase. It may produce a script that you will need to modify and run. If so, the script is well commented and tells you what edits to make. Next, run the iapi32 application again, this time connecting to your new docbase. The following script, when executed, will perform the load process.

create,c,dm_load_record
set,c,l,file_name
c:\temp\adump.new
save,c,l 

Documentum does not overwrite objects during the load process.

It is therefore possible that you will have duplicate objects in your docbase after a load.
This is common for generic objects that are created with every docbase (such as templates).
It can also occur if you perform two loads from dump files that contain the same objects.
There is no real way to avoid this; your only option is to manually delete the unwanted duplicates.
In our example, we specified how to set parameters on what objects are dumped from your source docbase.
If you intend to perform a full docbase dump, it is recommended that you use the following syntax:

set,c,l,dump_operation
full_docbase_dump

Registered tables are not included in your dump (or load) process.
You must manually move and re-register these in your new docbase.
Thursday, January 13, 2011

How to Clone the Content Server

We followed the steps below. I am not sure whether it is the best way to do it, but it worked for us.

1. Created a VMware instance, installed the Content Server on the new VMware host, and used the same repository ID as production
2. After successful installation, stopped all services
3. Dumped the production database and loaded it into the database used by the new installation
4. Copied the %DCTM_HOME%\data directory of the production repository to the data directory on the VMware host
5. Restarted the server and was able to see all the data and perform the tests