
Activity Plans vs SMART Templates

I wrote a post recently about a new service offering I have been working on to greatly speed up process automation, so this is another entry in my teaser series. Let me start by describing what is probably Siebel's first and most basic attempt at process automation: the Activity Plan. Activity Plans have been around in Siebel for a long time. I remember them in 2000, and they may have been there in 99.5, though to be honest I don't recall exactly when they made their appearance. Basically, an administrator creates an Activity Template consisting of a series of Activities. A user can then either automatically trigger the creation of an instance of this template (an Activity Plan) from an Opportunity Sales Stage transition or manually add one to any other object. Once the Plan is added, the Activities are automatically generated. This sounds great to a lot of business stakeholders because it seems like something they can apply in many scenarios. Its strengths are:
  • Can set any/all fields on an activity
  • Creates many activities at once, saving manual effort.
Unfortunately, once you start gathering any sort of requirements for a business process, you will start to stumble across the weaknesses:
  • Fields can only be set to constant values (this really impacts dates when it comes to activities)
  • Activities are created all at once, so any type of sequencing is impossible
  • This functionality only exists for Activities (No Service Requests, or other custom objects)
Now with customization, there are ways to get around some of these limitations, but at some point, you will probably end up either building something completely different or bastardizing the Activity BC itself.

In my last post on this topic I introduced you to the RARE Engine (Rule-based Approvals Routing and Escalations), and I already touched on its features. That is really only half of the story, though. There is a second component of my automation suite which I have branded SMART Templates. What a SMART Template does is create a task record and set fields on that record, while addressing all of the deficiencies of the Activity Plan:
  • Can create/update records of any type (administrator specifies the BC)
  • Can evaluate fairly complex expressions including date math to set fields
And when used in combination with the RARE Engine:
  • Records can be created in batches at different points in time, dependent on the completion of prior tasks.
OK, now we are getting somewhere. Once this service offering is implemented, any process can be maintained through the Siebel UI. You need to change the threshold at which a VP needs to approve an Order? No problem. You need to notify an additional person at a point in the New Customer Onboarding process? OK. You need to create three new Service Requests when final contract approval is given? You got it. You want to update the quote status when the customer approves it through your eSales portal? You betcha. All these things can be done by an administrator in real time.

Just remember, most processes are just a series of steps executed by people or systems. What the RARE/SMART Suite provides is a way to implement automation quickly, maintain those steps in the Siebel GUI, and enrich the processes themselves (Reporting, Reliability, Refinement).

eScript Framework - Logging Variables

Here is another entry into the logging framework previously discussed. The idea behind this function is to Log multiple variable values to separate lines of our log file using a single line of script. This keeps our script pristine and makes the log file well organized and easy to read. The script to call the function is as follows:

Log.stepVars("Record Found", bFound, " Account Id", sId);
The expected arguments are name/value pairs where the name is a descriptive string (could just be the name of the variable) and the value is the variable itself that we want to track. There is no limit to the number of pairs. There is an optional last parameter to indicate the enterprise logging level (stored in system parameters) above which this line should be written.

The results will be written to the log file as:

06/29/2010 13:33:53 ................Record Found: Y
06/29/2010 13:33:53 ..................Account Id: 1-ADR45
The script to implement follows. This is meant to be added as a new method in the 'eScript Log Framework' business service.

function stepVars () {
    // 'arguments' holds all the name/value pairs passed in, plus an optional
    // trailing log level; an odd argument count means the level was supplied
    var Args = arguments.length;
    var iLvl = (Args % 2 == 0 ? 0 : arguments[Args - 1]);
    var iParams = (Args % 2 == 0 ? Args : Args - 1);
    var sProp, sValue;

    // walk the name/value pairs two at a time and log each pair on its own line
    for (var i = 0; i < iParams; i++) {
        sProp = arguments[i++] + ": ";
        sValue = arguments[i];
        Log.step(sProp.lPad(30, ".") + sValue, iLvl);
    }
}
Also, a new line will need to be added to the Init section to register this function as a method of the Log object when the Application starts:

Log.prototype.stepVars = stepVars;
I want to draw particular attention to two eScript (JavaScript) features which may be useful in other applications. The first is how to reference a variable number of arguments in a function call. Notice the special array variable, 'arguments'. This array is defined as all of the arguments passed to the function, with no special declarations, and it can be referenced just like any other array. There are some exceptions in how this array can be manipulated though, with push() and pop() not working as you might expect.

The second is how to assign a variable using an inline if, also known as the ternary operator: (condition ? value if true : value if false). The condition is any expression that evaluates to either true or false. The expression after the ? is the value returned if the condition evaluates to true, and the expression after the : is what is returned if the condition evaluates to false.
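As a standalone illustration of both features (this snippet is purely for demonstration and is not part of the framework):

function sumWithLabel () {
    // 'arguments' holds everything passed in, with no declaration needed;
    // the ternary picks a default label when nothing was supplied
    var sLabel = (arguments.length > 0 ? arguments[0] : "Total");
    var nSum = 0;
    for (var i = 1; i < arguments.length; i++)
        nSum += arguments[i];
    return (sLabel + ": " + nSum);
}
// sumWithLabel("Line Items", 2, 3, 4) returns "Line Items: 9"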

Economies of Scale - Data Edition

In the process of describing how a typical Siebel installation reaches maturity, I summarized it thus:
...for any client, the first release or three are about implementing a robust data model, rolling on as many business units as possible to take advantage of the enterprise nature of that data model and gaining economies of scale, and maybe implementing some integration to get legacy data into Siebel
It strikes me that embedded in that sentence is another big picture concept I want to go into further detail about. Putting a call center on Siebel is nice for the Call Center and the managers of that call center from an operational standpoint. Putting a Sales division on Siebel is nice for those sales people and their managers too. In both cases, whenever a customer calls, the business case of using Siebel as a data model applies when we find that this customer has called before and we leverage that information to assist us on the current call.

Perhaps it is obvious, but it is even better when multiple business units are on Siebel, such that any given business unit can leverage the touchpoint history of the other business units when transacting with a customer who has corresponded with both. In other words, if a customer calls the Call Center, and the operator records information about that call, the Sales person can also leverage that same information, and the marketing division can market to that customer from the same database. This is what we mean when we talk about the enterprise nature of the application. The underlying data is to some extent shared with whatever visibility rules are deemed appropriate.

This is useful in the following ways:
  • More likely to get a hit when looking up a master data record
  • Reduces the need to key in master data information that has been entered before
  • Increases the speed at which the user can transact the true nature of the call
  • Reassures the customer that they are known by the business
  • Allows the user (or analyst or system) to identify a trend in the customer's transactions

There will often be a tension between choosing the best application to perform a certain task and gaining the economies of data scale identified above. This tension can be mitigated somewhat through good integration, but it is unlikely to go away completely. That is, SAP may be a better inventory management application, so there is a tension between storing my inventory information in SAP, which has built-in and customizable algorithms, and storing it in Siebel, which, while not as robust, has the advantage of making that data available in Siebel views and linking it to Siebel objects easily. Like I said, we can integrate SAP with Siebel, but this adds cost and complexity (and probably lag time). That does not mean it is not the right decision. In the case of inventory management, depending on how important that functionality is to the customer's core business, it may very well be the right decision. I just want to point out the tension between these concepts.

eScript Framework - GetRecords

Matt has launched YetAnotherSiebelFramework, a blog about... you get the idea. This is an important step forward in this community's attempt to create a true open source Siebel eScript framework. He adds flesh to the skeleton I have assembled here. He will shortly be adding his own posts to explain his functions in more detail, but I thought I would get a head start by starting a discussion about one of his most important pieces, the GetRecords function. I say one of the most important pieces, as the real driver behind this solution is to replace the many plumbing steps, as Matt calls them, that sit in so much of our script. For instance, to query an Account by Id (sId) to get the Location, you would write something like this:
var boAccount = TheApplication().GetBusObject("Account");
var bcAccount = boAccount.GetBusComp("Account");
with (bcAccount) {
ActivateField("Location");
ClearToQuery();
SetViewMode(AllView);
SetSearchSpec("Id", sId);
ExecuteQuery(ForwardOnly);

if (FirstRecord()) {
var sLoc = GetFieldValue("Location");
}
}
You get the idea. His function essentially replaces this with:
var sLoc = oFramework.BusComp.GetRecord("Account.Account", sId, ["Location"]).Location;
So that is pretty cool. What follows is mostly quibbling but I think attracting criticism from our peers is the best way to make this framework the most usable it can be. On a technical note, I am using 7.8 and the T engine for my personal sandbox so have not yet been able to get Matt's entire framework up and running. Nevertheless, I have gotten his individual functions running so I will limit my discussion to that scope. Here are my thoughts:

(1) My biggest point is to think about whether it makes more sense to return a handle to the BC rather than filling an array. I am thinking about this in terms of performance. There are times when having the array would be useful, like when I want to perform array operations on the data, such as doing a join. But often, I may just need to test a field value (or values) and perform operations on other values conditionally. In this case, I would only be using a small percentage of the data I would have filled an array with. It may also be useful to have a handle in order to use other Siebel BC functions like GetAssocBusComp or GetMVGBusComp. I do not claim to be a JavaScript guru, but I am curious about the performance implications. What I have done with my own framework is to build three functions (a rough sketch of the second one follows this list):
  • Bc_GetArray (this is basically the same as Matt's)
  • Bc_GetObject (stops before filling the array and just returns the handle to the BC)
  • Bc_GetInvertedArray (Same as Matt's but makes the fields the rows and the record the column)
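For illustration, a stripped-down Bc_GetObject might look like the sketch below. It assumes the same "BusObject.BusComp" string convention for the first argument and an array of field names for the third; this is only an outline of the idea, not the framework code itself.

function Bc_GetObject (sObject, sId, aFields) {
    // Rough sketch only: query a BC by row id and return the BC handle
    // (or null) instead of filling an array
    var aParts = sObject.split(".");
    var oBo = TheApplication().GetBusObject(aParts[0]);
    var oBc = oBo.GetBusComp(aParts[1]);
    for (var i = 0; i < aFields.length; i++)
        oBc.ActivateField(aFields[i]);
    oBc.ClearToQuery();
    oBc.SetViewMode(AllView);
    oBc.SetSearchSpec("Id", sId);
    oBc.ExecuteQuery(ForwardOnly);
    // In the real framework the BO would be held in the pool so the
    // returned BC handle stays valid for the caller
    return (oBc.FirstRecord() ? oBc : null);
}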
(2) I took out the following two lines:
aRow[aFields[i][0]] = vValue;
if (aFields[i][0].hasSpace()) aRow[aFields[i][0].spaceToUnderscore()]= vValue;
that check if the field name has a space and, if so, change it to an underscore, and replaced them with a single line:
aRow[aFields[i][0].spaceToUnderscore()]= vValue;
I think this should be more efficient: since a regular expression search is being done regardless, doing the replace in one step saves an operation.
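For context, I assume the two helpers are String prototype extensions along these lines (my guess at their shape, not Matt's actual code), which is why the replace-only approach avoids one of the two scans:

function hasSpace () {
    // a regular expression search happens here whether or not a space exists
    return (this.search(/ /) > -1);
}
function spaceToUnderscore () {
    // the replace is effectively a no-op when the string has no space
    return this.replace(/ /g, "_");
}
String.prototype.hasSpace = hasSpace;
String.prototype.spaceToUnderscore = spaceToUnderscore;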

(3) I like the first argument, "Account.Account" syntax for most situations. I think we can make this even more robust though by allowing us to pass in an already instantiated BC. This is probably infrequently necessary moving forward with the pool concept Matt has introduced, but there is a low cost way to handle either. What I have done is to add a test of the data type:
if (typeof(arguments[0])=="string") {
before starting the pool logic. I then added an else to allow us to pass a BC object in and add it to the pool:
else {
oBc = arguments[0];
this.aBc[oBc.Name()] = oBc;
}
(4) I think I understand where Matt is going with the pool as a mechanism to instantiate BCs less frequently. His bResetContext argument, the flag indicating that the pool be flushed, is I think unnecessarily drastic. If I understand it correctly, setting this flag to true would flush the entire pool. While this may sometimes be desired, it seems more useful to just flush the BO/BC in play. This would allow you to write code, for instance in nested loops that jump between BCs, without clearing context when it is not necessary to. I may not be thinking of a situation where the full flush would be necessary though, so if anyone can think of one I am all ears. My recommendation would be to make the flush just clear the passed BO/BC, but if the full flush is necessary, then perhaps a code indicating one or the other can be used. This could be accomplished by just removing the reference to the FlushObjects function, as the following if/else condition effectively resets the BO/BC variables in the array after evaluating the bResetContext argument.
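To make the suggestion concrete, a targeted flush could be as simple as the sketch below. It assumes the pool keeps BOs and BCs in arrays keyed by name; aBc comes from the snippet above, while aBo is my assumption about how the BO side is stored.

function flushObject (sBoBc) {
    // Sketch only: release just the requested BO/BC from the pool
    // instead of flushing everything
    var aParts = sBoBc.split(".");        // e.g. "Account.Account"
    if (this.aBc[aParts[1]] != null)
        this.aBc[aParts[1]] = null;       // drop the BC handle
    if (this.aBo[aParts[0]] != null)
        this.aBo[aParts[0]] = null;       // drop the BO handle (assumed aBo pool)
}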

Expectations and Changes

When doing a Siebel project, there will always be a balancing act between managing client expectations and delivering everything the customer wants. I am not trying to finesse anything when I say managing client expectations; from the way I put it in that sentence, you may have inferred I meant not delivering what the customer wants. But that is not really the case, as frequently the client does not necessarily know what they want, or their understanding of what they want evolves as they understand the capabilities and the implications of a CRM strategy/product.

We see this unfold in different ways on different projects. In a green field implementation (new to Siebel), Phase I is typically a data model implementation where the majority of the development work revolves around building views. Now there is obviously a lot that goes on behind the scenes, but from a client's point of view, we are mostly showing them views, and using a view as a way to communicate the concepts of a data model. That is, the view becomes the way to communicate relationships and attributes. The presence or absence of a field on a view becomes a visual indicator of whether a logical attribute exists or not in our build out. An attribute expressed as a single value field in an applet provides a visual cue that a user can only enter one value. Because the views provide extensive visual reinforcement, it is easy for stakeholders to identify gaps through the testing and acceptance process by saying, aha, I do not see this field, or I need to enter more than one of that value, or there needs to be a view linking these two objects.

Integration based projects tend not to have the same issues when integrating to a legacy system as there are typically a pair of technical architect types that are fairly knowledgeable about the preexisting data models of each application. The project is mainly a matter of synchronizing these efforts. Testing and user acceptance though can again identify visually when a field or record set is blank to recognize that a gap exists.

Where I am leading with all this is the nature of an automation oriented project. Automation is by its nature typically new. Perhaps the steps have existed, but the mechanisms we are using to automate, to add speed to the process, have never existed before. This adds some expectation management issues that are a bit different than in other types of projects. The types of changes necessary have an added dimension. Gaps in the specifications will likely be caught during the testing phase, such as a field not being populated or a decision branch executing on the wrong condition. The added dimension is time and frequency. For instance, a popular way to automate processes is to add reminders to a process when steps are not executed, or to change the status of a record to indicate an escalation in priority or status. I would posit that users do not really know how frequently they will want to be reminded because they do not necessarily have a sense of the scale or frequency of the events. Frequently, during an interdepartmental process, one department may perceive the severity of an issue as higher than the department they are working with. These are important considerations because a user that is reminded too frequently (when in fact they are aware of a task but are waiting on other deliverables in the normal course of performing it) will begin to ignore the reminders. Being informed of a number of outstanding items too frequently causes us to tune them out, as anything that happens so frequently is typically assumed not to be too severe.

It is likely that system users will request, some time soon after deployment, that these reminders be scaled back or, if the capability to do so has not been built into the project, that they be turned off altogether, thereby losing the value of that particular automation. So where am I going with all this? While workflows can be redeployed without a major release, it is unlikely most Siebel project teams are actually prepared to do so on short notice. It is possible to account for this by explicitly adding requirements for it, but of course this adds complexity and scope to the project.

This is all why I built the RARE Engine to be extensively customizable in the GUI, including the turning on and off of email reminders, the setting of the text of the reminder/escalation message, and the delay interval between reminders and escalations both on a per person and per process basis. This means that after the process has been automated and deployed, an administrator can tweak these parameters to the individual needs of the user base.

About defaults, picks, maps and SetField events

That is an eclectic list of things in the title, and no I do not intend to talk about them all in detail other than to discuss a bit about how they interact and some of the design implications they may cause. So let me start with another list:
  • Pick Maps do not cascade
  • Fields set by a pick map cause a SetFieldValue event
  • Defaults do not cause a SetFieldValue event
  • On Field Update Set BC User Prop will trigger a SetFieldValue event
  • SetFieldValue event triggers a Pick
  • Setting a field to itself does not trigger a SetFieldValue event
So those are the important findings I had to deal with when implementing a seemingly simple requirement. My client had a contact type and sub type. The contact type should be denormalized from the related account's type. Finally, they want to set the contact sub type dynamically to a different value depending on the contact type. By dynamically, I mean not hard coded, so it can be changed without a release.

Let me put all that in functional terms by providing an example. The Account Type has a static LOV with values 'Bank' and 'Government'. The Contact can potentially be created as a child of an account, inheriting information from the account record, and triggering Parent expression default values, or can be created from the Contact screen without an account, but with the option to set the account later. When an account is specified for a contact, the contact type will be set to match the account type, otherwise the contact type should be set to 'Other'. If the Contact type is 'Bank', the contact sub type should get set to 'Retail', and if the contact type is 'Government', the sub type should be set to 'HUD'. So the basic configuration we started with was to put the desired 'dynamic' sub type value in the Low column on the LOV table. Then set up the pick map for contact type as such:

Field               Picklist Field
Contact Type        Value
Contact Sub Type    Low

It would be convenient to just set the pick map similarly on Account Type as:

Field               Picklist Field
Account Type        Value
Contact Type        Value

But the first rule above states this will not work because pick maps do not cascade. This makes some sense, as you could conceivably end up with some circular logic. Another convenient option, in the case where the contact is created as a child of an account, would be to predefault the Contact Type to the Account Type. But again, according to the rules above, a predefault will not trigger a SetField and hence no pick map.

So in order to trigger the pick map on Contact Type, we need to trigger a SetFieldValue event on this field. What to do. Oh, and I did not want to use script. My solution had a couple of dimensions.
  1. When a contact is created on the Contact Screen and the account is picked, I am going to trigger a set field value on the Account Type by creating a joined field on the Contact BC called Account Type, and add this field to the Account pick map. So this will trigger my SetFieldValue event. I then will add an 'On Field Update Set' BC User property to the Contact BC so that when the joined Account Type field is updated, set the Contact Type to the Account Type. Using a User Property will then trigger the SetFieldValue event on Contact Type which will then trigger the pick map to set the Contact Sub Type. So far so good.
  2. My approach on the scenario where a Contact is created as a child of an Account is not as clean. The problem here is that predefaults do not trigger SetFieldValue events. And in this case, all the account information will already have been set via predefault, so there is no field being explicitly set by a user to trigger the user property. So I had to get creative. What I did was similar to above but placed identical user properties on the Contact First and Last Name fields. Since these are required fields that are typically entered first, they will trigger the user properties to set the contact type and sub type. In order to minimize the UI impacts of this admittedly kludgy design, I wanted the visible Contact Type in the applet to default correctly to the Account Type from the parent record. This means that when the user sets the First Name (or the Last), the Contact Type will already have the correct value, so the user property would essentially set it to itself. The last rule above states this will not trigger the SetFieldValue event. To get around this I create two user properties in sequence, the first to set the Contact Type to null, and the second to set it back to the Account Type. Because I am putting the properties on both the First and Last Name (to accommodate different users' field population sequences), I also want to add a conditional to the user properties so they do not execute if the Sub Type has already been set.
What does all this leave us with? In addition to the pick map on the Account field mentioned first, here are the On Field Update Set user properties on the Contact BC:
  1. "Account Type", "Contact Type", "[Account Type]"
  2. "First Name", "Contact Type", "", "[Contact Sub Type] IS NULL"
  3. "First Name", "Contact Type", "[Account Type]", "[Contact Sub Type] IS NULL"
  4. "Last Name", "Contact Type", "", "[Contact Sub Type] IS NULL"
  5. "Last Name", "Contact Type", "[Account Type]", "[Contact Sub Type] IS NULL"
I am going to leave it there, but this actually gets even more complicated. Because a contact can be created from a pick applet from a service request, I also had to account for predefaulting the account to the SR's account and this impact this would have on predefaulting Contact Type and Sub Type. If anyone would like to see how this is done, here is where to start.

Spelunking in the Barcode Cavern

My new client would like to use a Barcode scanner for a whole variety of Field Service applications:
  • Shipping Label to look up an RMA Order and update some fields
  • Asset Label to look up or create an RMA Order Line Item and update some fields
  • Asset Label to look up a Repair record and update some fields

Siebel Bookshelf and Supported Platforms provide some basic information. There are a couple of approaches to using a Barcode scanner:
  • Treat it like any data entry device. In other words, you prepare your record (click new, Clear to Query, etc.), click into a field, scan your barcode, the scanner copies the translated barcode value to the field, then you do what you want (save the record, execute query, etc).
  • Use the Barcode Toolbar. This has some basic modes (New, Update, Find) and an administration area that ties a View to one or more modes and a field. So when you navigate to a view, Siebel (when the barcode toolbar is turned on through an object manager parameter) checks to see if any barcode admin records exist for that view and the currently selected mode. If so, these appear in a dropdown in the toolbar that a user can select a value from. If the user then scans something, the Application "processes" the barcode depending on the mode, either doing a query based on a specified field, updating a field on the current record, or creating a new record and populating a specified field.
This sounds groovy until you hear about some of the limitations and start thinking about a more realistic process. So here are the limitations as I understand them:
  • Only some Barcode Types (think fonts) are supported.
  • The processing can only occur in the primary BC of the BO, or the Parent BC in a master detail view.
  • Serial Numbers cannot be looked up (I am still investigating why this is but I am guessing it has to do with them possibly not being unique).
  • Only barcode scanners that support using customizable control character before and after the scanned input will work
  • A single input value is taken (so no splitting of a concatenated value)
  • You basically have to tell the toolbar what value to expect (again, no intelligent parsing)
Prototyping:
  • Ensure you have the Field Service and Barcode license keys
  • In the Field Service cfg file (if using a thick client), set the ShowBarcodeToolbar parameter to TRUE. Intuitively enough, this will make the Barcode toolbar appear in your app upon restart.
  • Click the enable button (far right hand button) on the toolbar
  • As you navigate to a view, the application will perform a query of the 'FS Barcode Mappings' BC, or S_BC_ENTRY_TRGT table for admin records corresponding to the current view and the currently selected processing mode (the three buttons to the left of the dropdown in the toolbar each correspond to a different mode). If you think about it, this is sort of similar to how Actuate reports are tied, except you can actually administer this a bit in the GUI.
  • We can mimic a barcode scan by using <ctrl-\>, followed by the translated value we are trying to scan (SR number for instance), followed by another <ctrl-\>
  • If you want to use different control characters than <ctrl-\> (because maybe that one is already taken or something), these are set on the 'HTML FS Barcoding Tool Bar' business service as User Properties. I will leave them be.
So in my real life example, I will:
  1. Navigate to All Service Requests
  2. Click Enable on toolbar
  3. Click the rightmost of the left-side buttons on the toolbar, 'Find'
  4. Leave the dropdown as 'Serial Number'
  5. Hit <ctrl-\>
  6. Type in an SR # I can see in the list
  7. Hit <ctrl-\> again
  8. The Application should query for the SR # I entered
I am now going to dive into figuring out a better way to customize this behavior. I'll be back.

Hacking the 'HTML FS Barcoding Tool Bar' Business Service

In case you were curious what happens in the black box, once the Barcode toolbar is up and running, here is a dump of the Input and Output property sets from each Method that is called:

When the application starts up, the 'IsBarcodeEnabled' method is called about 15 times, is passed an empty property set and returns:
01  Prop 01: IsBarcodeEnabled           / 1

Also on startup, the 'ResetButton' method is called, which appears to set which buttons on the toolbar are turned on or off and which buttons are active. Resetting them makes the enable button active and off, and the process mode buttons inactive and off, as you can see from the outputs. Here are the Inputs:
01  Prop 01: SWECmd                      / InvokeMethod
01 Prop 02: SWEMethod / ResetButton
01 Prop 03: SWEService / HTML FS Barcoding Tool Bar
01 Prop 04: SWERPC / 1
01 Prop 05: SWEC / 1
01 Prop 06: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01  Prop 01: NEW_ENABLED                 / N
01 Prop 02: ACTIVE_ENABLED / Y
01 Prop 03: ACTIVE_CHECKED / N
01 Prop 04: UPDATE_CHECKED / N
01 Prop 05: FIND_ENABLED / N
01 Prop 06: FIND_CHECKED / N
01 Prop 07: NEW_CHECKED / N
01 Prop 08: UPDATE_ENABLED / N

The control keys are then determined. First the 'GetStartKeyCode' method is called with these Inputs:
01  Prop 01: SWECmd                      / InvokeMethod
01 Prop 02: SWEMethod / GetStartKeyCode
01 Prop 03: SWEService / HTML FS Barcoding Tool Bar
01 Prop 04: SWERPC / 1
01 Prop 05: SWEC / 2
01 Prop 06: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01  Prop 01: KeyCode                     / 220

Lastly, the End key via the 'GetEndKeyCode' method with these Inputs:
01  Prop 01: SWECmd                      / InvokeMethod
01 Prop 02: SWEMethod / GetEndKeyCode
01 Prop 03: SWEService / HTML FS Barcoding Tool Bar
01 Prop 04: SWERPC / 1
01 Prop 05: SWEC / 3
01 Prop 06: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01  Prop 01: KeyCode                     / 220

Clicking the enable button triggers the 'Active' method with these Inputs:
01  Prop 01: SWEActiveView               / All Service Request List View
01 Prop 02: SWECmd / InvokeMethod
01 Prop 03: SWEMethod / Active
01 Prop 04: SWEActiveApplet / Service Request List Applet
01 Prop 05: SWEService / HTML FS Barcoding Tool Bar
01 Prop 06: SWERPC / 1
01 Prop 07: SWEC / 22
01 Prop 08: SWEIPS / @0*0*0*0*0*3*0*

and these Outputs:
01  Prop 01: OPTION0                     / Service Request
01 Prop 02: NEW_ENABLED / Y
01 Prop 03: OPTION2 / Repair
01 Prop 04: ACTIVE_ENABLED / Y
01 Prop 05: ACTIVE_CHECKED / Y
01 Prop 06: OPTION3 / Pick Ticket
01 Prop 07: UPDATE_CHECKED / N
01 Prop 08: OPTION6 / Serial #
01 Prop 09: FIND_ENABLED / Y
01 Prop 10: Check / 1
01 Prop 11: OPTIONS_LENGTH / 7
01 Prop 12: OPTION4 / Order
01 Prop 13: FIND_CHECKED / Y
01 Prop 14: OPTION5 / Product
01 Prop 15: NEW_CHECKED / N
01 Prop 16: OPTION1 / Asset #
01 Prop 17: UPDATE_ENABLED / Y

Clicking the Find button gives you these Inputs:
01  Prop 01: SWEActiveView               / All Service Request List View
01 Prop 02: SWECmd / InvokeMethod
01 Prop 03: SWEMethod / Find
01 Prop 04: SWEActiveApplet / Service Request List Applet
01 Prop 05: SWEService / HTML FS Barcoding Tool Bar
01 Prop 06: SWERPC / 1
01 Prop 07: SWEC / 11
01 Prop 08: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01  Prop 01: OPTION0                     / Service Request
01 Prop 02: NEW_ENABLED / Y
01 Prop 03: OPTION2 / Repair
01 Prop 04: ACTIVE_ENABLED / Y
01 Prop 05: ACTIVE_CHECKED / Y
01 Prop 06: OPTION3 / Pick Ticket
01 Prop 07: UPDATE_CHECKED / N
01 Prop 08: OPTION6 / Serial #
01 Prop 09: FIND_ENABLED / Y
01 Prop 10: Check / 1
01 Prop 11: OPTIONS_LENGTH / 7
01 Prop 12: OPTION4 / Order
01 Prop 13: FIND_CHECKED / Y
01 Prop 14: OPTION5 / Product
01 Prop 15: NEW_CHECKED / N
01 Prop 16: OPTION1 / Asset #
01 Prop 17: UPDATE_ENABLED / Y

Clicking the New button (on the toolbar) gives you these Inputs:
01  Prop 01: SWEActiveView               / All Service Request List View
01 Prop 02: SWECmd / InvokeMethod
01 Prop 03: SWEMethod / New
01 Prop 04: SWEActiveApplet / Service Request List Applet
01 Prop 05: SWEService / HTML FS Barcoding Tool Bar
01 Prop 06: SWERPC / 1
01 Prop 07: SWEC / 23
01 Prop 08: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01  Prop 01: OPTION0                     / Serial Number Entry
01 Prop 02: NEW_ENABLED / Y
01 Prop 03: ACTIVE_ENABLED / Y
01 Prop 04: ACTIVE_ENABLED / Y
01 Prop 05: UPDATE_CHECKED / N
01 Prop 06: FIND_ENABLED / Y
01 Prop 07: Check / 1
01 Prop 08: OPTIONS_LENGTH / 1
01 Prop 09: FIND_CHECKED / N
01 Prop 10: NEW_CHECKED / Y
01 Prop 11: OPTIONS_LENGTH / 7

Clicking the Update button (on the toolbar) gives you these Inputs:
01  Prop 01: SWEActiveView               / All Service Request List View
01 Prop 02: SWECmd / InvokeMethod
01 Prop 03: SWEMethod / Update
01 Prop 04: SWEActiveApplet / Service Request List Applet
01 Prop 05: SWEService / HTML FS Barcoding Tool Bar
01 Prop 06: SWERPC / 1
01 Prop 07: SWEC / 24
01 Prop 08: SWEIPS / @0*0*0*0*0*3*0*

And these Outputs:
01  Prop 01: OPTION0                     / Asset
01 Prop 02: NEW_ENABLED / Y
01 Prop 03: ACTIVE_ENABLED / Y
01 Prop 04: ACTIVE_ENABLED / Y
01 Prop 05: UPDATE_CHECKED / Y
01 Prop 06: FIND_ENABLED / Y
01 Prop 07: Check / 1
01 Prop 08: OPTIONS_LENGTH / 1
01 Prop 09: FIND_CHECKED / N
01 Prop 10: NEW_CHECKED / N
01 Prop 11: UPDATE_ENABLED / Y

And perhaps the most important one, scanning the data. This executes the 'ProcessData' method and would occur after the second end control character is received from the scanner. The Inputs are:
01  Prop 01: OPTION                      / Service Request
01 Prop 02: BARCODE / 2-7144002

And these Outputs:
01  Prop 01: Applet Name                 / Service Request List Applet

Keep in mind that in many cases, the actual property values are based on data pulled from the 'FS Barcode Mappings' BC.
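In case you want to capture dumps like these yourself, one way (not necessarily how the output above was produced) is to add a small property-set walker to the service's PreInvokeMethod and InvokeMethod events. The helper below is a rough sketch and assumes a Log.step style logger like the one from the eScript framework posts:

function dumpProps (psProps, sLabel) {
    // Walk every property in the property set and log one name/value pair per line
    Log.step(sLabel);
    var i = 1;
    var sProp = psProps.GetFirstProperty();
    while (sProp != "") {
        Log.step("Prop " + (i < 10 ? "0" + i : i) + ": " + sProp + " / " + psProps.GetProperty(sProp));
        sProp = psProps.GetNextProperty();
        i++;
    }
}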

The Dead Ends of Barcode Hacking

Most technical blog posts are about solutions. Since this series on Barcodes is also about my journey, I thought it might be interesting to also talk about what I tried out but did not work. Who knows, maybe I can save someone the effort of trying these. Or perhaps the patterns I am finding through these dead ends will help someone head off into a totally new direction as it has helped me.

Auto Enabling
So the first thing I thought would be cool would be to auto enable the Barcode toolbar, and the natural place to do this seemed to be the Application Start event. After a lot of trial and error, my application kept crashing after trying to invoke the 'Active' method. The 'Active' method receives as inputs the Active View Name and Active Applet Name. The startup page is not actually instantiated yet when the Application Start event executes, so even hard coding a startup page into the input property set results in an application crash. So Application Start is not the right place.

Applet Context
When trying to call various barcode service methods through script, many of them require the applet name as an input parameter. Trying to use ActiveApplet, though, results in the error you would typically receive when you are not in a GUI context, such as when using EAI. ActiveViewName does work though, so it is only the applet. I think what is happening is that when you click a toolbar button, even though an applet appears to remain in focus (judging by the color pattern of the applets), focus is actually on the toolbar, and hence ActiveApplet does not work. Well, that is my theory anyway.

Default to Find Mode
My client will mainly be using the Find process mode, so I thought it would be good that if I could not auto enable the toolbar, at least I could default it to Find mode once it is enabled. So I trapped the Active method on the business service and called the Find method from the InvokeMethod event after the Active method runs. But this does not quite work. If I click the Enable button twice though, it does. It appears that this is a context issue. It is as if GUI context has been returned to the user prior to the Find script executing.

I noticed that a series of barcode events trigger anyway when the Application starts. I therefore tried triggering my auto enable scripts from the tail end of one of these events, again through the InvokeMethod event, but again ran into the context issue.

SWE From Script
The interesting thing to me is that the input parameters to all of these methods are a series of SWE Commands, Methods and parameters. It seems as though another browser thread or frame is being used where SWE commands are the language Siebel uses to initiate the logic. There is probably a way to call a SWE command directly through script but I am not aware of it. What I am thinking is to use SWE command to refresh the context of the GUI thread after a Barcode method has been called, then to explicitly call a followup method. I cannot do this directly as the results of the second method call appear to get lost as the context has been returned to the GUI before the second call.

My Barcode Promised Land

The effort of trial and error, traversing dead ends, and determining what I could not do led me eventually to what I could. Let me start by saying that if I were a Siebel engineer (completely unaware of what constraints they had to work with) I would have provided an Application level method called something like BarcodeScan that could be trapped. I could then put a runtime event on it and trigger a workflow when I was done. But then again I also would not have coded in the limitations I mentioned earlier.

Barring all that, I still needed a couple of basic things:
  • Hook to trigger additional functional logic
  • Do lookups on Serial Numbers
Additionally, it would be nice to:
  • Minimize the number of clicks
  • Do lookups on the child record of a BC
  • Parse the input so that I could do different stuff based on the type of data
Given those must-haves and nice-to-haves, I decided to hack the business service, trap the methods in question and just do my own thing. I should mention, that my initial approach was more from a wrapper perspective than a replace perspective. That is, I thought I could just trap the method, do my stuff, then continue with the vanilla method. Here is the problem though. Since everything that happens in the vanilla method threads occurs out of the GUI context, I cannot leverage any Active... methods. Therefore to do something as simple as update the record returned by the vanilla lookup, I would have to requery for it in my own objects to get it in focus to update it. Well if I am requerying for it, what is the point of doing the same query twice? I can just do my own query once in the Active object and then trigger my post events.

Let me start by walking through the most important must-have:

Hook to trigger additional functional logic
I have sort of hinted at how this was achieved in general. Once I realized that the 'HTML FS Barcoding Tool Bar' service was getting called, I modified the server script on this service to log when its methods are called. The important method here is 'ProcessData', which is the one method called regardless of the processing mode in use. At this point you have the barcode data and the Entry mode. You can also determine what view you are on via ActiveViewName. I trapped the Find, New and Update methods in the PreInvokeMethod event to store the current processing mode in a profile attribute:
switch (MethodName) {
case "Find":
case "New":
case "Update":
TheApp.SetProfileAttr("BarcodeProcessMode", MethodName);
break;
}
With these three fields, the View, Process Mode, and Entry Mode, I can query the FS Barcode Mappings BC for a unique record.

boBCMappings = TheApp.GetBusObject("FS Barcode Mappings");
bcBCMappings = boBCMappings.GetBusComp("FS Barcode Mappings");
with (bcBCMappings) {
ClearToQuery();
SetViewMode(AllView);
ActivateField("Field");
ActivateField("Applet BC");
SetSearchSpec("View", sView);
SetSearchSpec("Entry Mode", sEntryMode);
SetSearchSpec("Process Mode", sProcessMode);
ExecuteQuery(ForwardOnly);
bFound = FirstRecord();

if (bFound) {
...
What I want to get from that record for now is the lookup field. I also need to know the active BC to do the lookup in. Again, I cannot use ActiveBusComp or ActiveApplet, so I just added a join from the FS Barcode Mappings BC to the repository S_APPLET table, based on the applet name already stored in the admin BC, and added a joined field based on S_APPLET.BUSCOMP_NAME. I still feel like there is a better way to do it, but that is where I am at right now. Anyway, from the admin record I have a BC to instantiate, a field to set a search spec on, and the text value of the search spec.
sField = GetFieldValue("Field");
sBusComp = GetFieldValue("Applet BC");

boObject = TheApp.ActiveBusObject();
bcObject = boObject.GetBusComp(sBusComp);
with (bcObject) {
ClearToQuery();
SetViewMode(AllView);
ActivateField(sField);
SetSearchSpec(sField, sLogicalKey);
ExecuteQuery(ForwardOnly);
bFound = FirstRecord();

if (bFound) {
...
My client has multiple barcode processes so all this could be happening in different places. So the last step is to add some logic to branch out my hook. I am using the BC for now but we could make this more robust:
switch (sBusComp) {
case "Service Request":
ProcessSR();
break;

case "Asset Mgmt - Asset":
ProcessAsset();
break;
}
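Purely as an illustration of where the hooks lead, a ProcessSR stub might look like the sketch below; the field and value are hypothetical placeholders, and the real hooks would carry the client-specific logic.

function ProcessSR () {
    // Hypothetical hook: bcObject is assumed to be declared in the service's
    // declarations section, still holding the SR found by the barcode lookup above.
    // The field and value here are illustrative only.
    with (bcObject) {
        SetFieldValue("Status", "Open");
        WriteRecord();
    }
}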

Common (or not) eScript Syntax Errors

I would love to post a comprehensive list of gotchas, but then that would make them not gotchas if you know what I mean as I would know them all. So instead, I will mention what sidelined me for several hours last night and hope to spur some discussion about what other people have come across. If I think of others over time, I will try to update this post.

Space after the function name. I had copied and pasted some functions from somewhere else in my client's repository, and the functions had no space between the name and the opening parenthesis of the passed variable declarations. I was not (and I guess still am not) aware of a limitation in this regard, but I saw all sorts of strange behavior afterward. Namely, the calls to these functions seemed to be ignored, which took me a long time to realize. They seem to work fine in their original home elsewhere in the repository, so this may be related to context, but suffice to say this is something to think about when troubleshooting.

A Basic Interface - Building the Integration Object

I am not sure how easy it will be to summarize EAI in a couple of blog posts, as there are definitely a lot of ifs and buts in the design process. Nevertheless, I think it would be useful to show how to build a basic interface using a couple of different techniques. Frequently your client's enterprise architecture will drive which to use.

Integration generally takes one of three forms:
  • Query - Returns a data set of source data to be displayed in the target system
  • Schema Update - Takes a hierarchical data structure and applies it to the target system
  • Functional Action - Triggers a service to perform some set of business rules
There is perhaps some overlap here, and any of these can be inbound or outbound to Siebel, but this is a general way of categorizing your interfaces. And within each there are several different ways to implement more specific requirements.

Regardless of approach, the basic component of most interfaces is the structure of how data is viewed or applied. Let's say we need to Upsert a Service Request. A Schema Update assumes a hierarchical organization of data using the Integration Object data structure. Bookshelf provides extensive instruction on how these are built and configured to achieve certain goals so I will only touch on the highlights.

First, create an Integration Object in Siebel: from the Tools File Menu, New Object Wizard, EAI Tab, choose Integration Object. In the wizard, select the Project and choose 'EAI Siebel Wizard' from the second dropdown, and click Next. For the purpose of this example, we can just use the Service Request business object as the source object and the root BC will be Service Request. Enter a name of your choosing and click Next. In the next wizard page, deselect all child objects for which there are no fields to set. In this case that will be all of them except for the root as the more objects and fields in the message, the longer it will take the various architecture components to parse and translate the message. Click Next, then Finish on the next page.

Your Integration Object has been created. The next step is to verify the user keys. An integration object needs to have a valid user key in order to do an upsert. This basically specifies which key fields to use to find a record to update. In my example for Service Request, a key was not generated by the wizard so I will create one. Navigate to Integration Component Key in the explorer under the Service Request Integration Component. Create a new record, provide a name, set the sequence number to 1 and the key type to 'User key'. Create a child record in Integration Component Key Fields, provide a name and set the Field Name to 'Id'.

Another optional step we will use in this example is the Status Key. After creating a service request, I want to return the service request number to the external system as verification of success and so this SR can be referenced later by the customer. To do this we use the Status Key. This is basically a structure of the data set we wish to return from the EAI Siebel Adapter call and pass back to the calling system. A Status Key can be specified for each Integration Component, so the final data set is the structure of all the keys combined hierarchically. In this case, navigate to Integration Component Key in the explorer under the Service Request Integration Component, create another new record, provide a name, 'StatusKey', set the sequence number to 1 and the key type to 'Status key'. Create a child record in Integration Component Key Fields, provide a name and set the Field Name to 'SR Number'.

Finally, while not absolutely necessary, you should inactivate all fields you are not using in each Integration Component. For an inbound upsert to Siebel, the calling system does not need to provide all the fields that are active in the IO schema, but if a field is active in the IO, then the external system could send that data element, which may have undesired effects depending on the interface. Make sure all fields used in the key are activated, as well as all fields being passed from the external system. Unlike BC field lengths, the length property of an Integration Component Field matters more, because when an XSD is generated and provided to the external system, this property will frequently be used by the web development tool to validate the data entered into that field. You can also change the XML Tag attribute to a label recognized by the external system (so long as spaces are removed).

One thing to keep in mind is that if an insert is desired, then the calling system should just pass a constant to the user key field, 'Id' so that Siebel will not find a record and a new one will be created. A value like '_New_Record_' is a safe value because the '_' will never be part of a generated row id.
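To make the shape of the message concrete, an upsert payload for this kind of integration object might look roughly like the sample below. The IO name, list tag and field tags are placeholders; the real tags are driven by the XML Tag properties on your components and fields.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative only: names and tags depend on your IO definition -->
<SiebelMessage MessageId="" MessageType="Integration Object"
               IntObjectName="My SR Upsert IO" IntObjectFormat="Siebel Hierarchical">
  <ListOfMySRUpsertIO>
    <ServiceRequest>
      <!-- constant user key value so no match is found and an insert occurs -->
      <Id>_New_Record_</Id>
      <Abstract>Printer is jamming intermittently</Abstract>
    </ServiceRequest>
  </ListOfMySRUpsertIO>
</SiebelMessage>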

A Basic Interface - Web Service Workflow

Just about every interface consists of two basic components: the integration object(s) and the workflow or business service. I will demonstrate a workflow approach which will give you more opportunity to customize down the road.

It is here that we begin to differentiate the integration by the communication mechanism. Because I am designating this integration as a Web Service, that will drive the type of data this workflow will expect as an input and output. The workflow I build will eventually be exposed as a WSDL to be consumed by an external program. That WSDL should have the definition of the message it is expecting, in this case, the XSD, or definition of the Integration Object we just built. How we accomplish this is to set the Input Process Property to a Data Type of 'Integration Object' and to actually specify the integration object we built, in the Integration Object attribute of the process property.


There is also a place holder process property for the SR Number that I want to return to the external system in the response message. The 'IncomingXML' property is already in the format needed to be passed to the EAI Siebel Adapter, so there is no conversion necessary. And we are assuming that the data being passed is exactly as it should be applied. You will create the following steps, which I will explain (other than Start and End, which are self explanatory):
The 'Upsert SR' step is a Business Service step calling 'EAI Siebel Adapter'. Now here is another design decision to be made. Each of the available methods determines exactly how the data should be applied, but there are two broad approaches. If we were to use the Execute method, then the 'operation' element which exists in each component of the IO would be used to determine how the data should be applied. This gives more control to the calling system (or a data map, which I will discuss later). The other set of methods is essentially a one-size-fits-all approach, applying all the data uniformly. I will use the latter approach here and set the method to 'Upsert'. There is only one component in my IO, so if the record exists, it will be updated, otherwise it will be inserted. The input arguments for this step are the IncomingXML message from the external system and a parameter telling the EAI Siebel Adapter to create the Status Object.

There is one Output Argument. We no longer care about the input message at this point because it will have been applied so we just overwrite it with the return, which in this case will be the status key.
The last step in the WF is another Business Service step calling the 'PRM ANI Utility Service', 'GetProperty' method. This business service has a plethora of useful methods for manipulating property sets. This particular method will extract the value of a field from an integration object instance, in this case the SR Number from the status key hierarchy returned by the adapter.
The output is to set the process property 'SRNumber' with the Output Argument, 'Property Value'. When the return message is sent back to the calling system, this property will exist with the generated SR Number.

Simulating/Troubleshooting this WF from within tools is difficult as built so I sometimes add a bypass step off the start branch to read the integration object from a file. I may talk about this later but want to keep this post pretty straightforward. So for now, this workflow can just be deployed, checked in and activated.
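For what it is worth, the same two steps can be approximated in a throwaway eScript harness, which is sometimes easier to step through than the simulator. The sketch below takes the SiebelMessage hierarchy as a parameter (however you obtain it for testing); the input and output argument names are from memory, so treat them as assumptions and verify them against the method definitions in Tools.

function simulateSRUpsert (psMessage) {
    // psMessage: a SiebelMessage property set matching the integration object
    var svcAdapter = TheApplication().GetService("EAI Siebel Adapter");
    var svcUtil = TheApplication().GetService("PRM ANI Utility Service");
    var psIn = TheApplication().NewPropertySet();
    var psOut = TheApplication().NewPropertySet();

    // As far as I recall, the adapter expects the SiebelMessage as a child of the
    // inputs; StatusObject asks it to return the status key hierarchy
    psIn.AddChild(psMessage);
    psIn.SetProperty("StatusObject", "True");
    svcAdapter.InvokeMethod("Upsert", psIn, psOut);

    // Pull the SR Number back out of the returned status key hierarchy
    var psGet = TheApplication().NewPropertySet();
    var psVal = TheApplication().NewPropertySet();
    psGet.AddChild(psOut.GetChild(0));
    psGet.SetProperty("Property Name", "SR Number");   // assumed argument name
    svcUtil.InvokeMethod("GetProperty", psGet, psVal);
    return psVal.GetProperty("Property Value");
}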

A Basic Interface - Integration Object User Props

I know this is meant to be a basic interface without much complexity, but let's be realistic about the requirements we are likely to get. Even a simple upsert of something as basic as a service request is likely to require a bit of digging into Bookshelf so that the interface is able to mimic basic GUI functionality. I will discuss some of the most commonly used User Properties necessary to implement even an advanced interface. When in doubt about the syntax of any of these properties, take a look for an example in the Tools flat view.

PICKLIST

This is the most common Integration Component Field User Property you will see and it basically tells the EAI Siebel Adapter to validate the picklist in the interface. This property is generally created by the wizard so I bring it up only because validating the picklist here will allow for several different ways to interpret a picklist field value described by some of the user properties below.

PicklistUserKeys

In the GUI, when you type an account name in the Account field on another BC that has a picklist of Accounts, and there is more than one record matching that name (with different locations), a pick applet will pop open with the constrained list of accounts having that name. The GUI is letting a user decide which of the multiple records returned was meant to be picked. An interface does not have that luxury, so the PicklistUserKeys Integration Component Field User Property is provided to mimic this action. The value of this property should be a comma separated list of fields representing the logical key of the picklist record to look up. These fields must all be present in the integration component (though their values can be null). The 'PICKLIST' user property must also exist for the field where this property is used, and its value must be 'Y'.

Ignore Bounded PickList

When a picklist is validated in the interface and the value passed is not found, the EAI Siebel Adapter stops processing and returns an error. If the data is expected to sometimes be missing though, you may want the foreign key to just be left blank. For instance, maybe the service request, in our example is tied to an order via a back office order number, but the order was never loaded. Add this user property with a value of 'Y' in combination with the PICKLIST user property with a value of 'Y'. The EAI Siebel Adapter will look up the record by the user key provided (can also be used in combination with PickListUserKeys) but if it is not found, will set the field to blank in the integration object before applying the data. Keep in mind that this property will only work as expected if the Picklist object the underlying BC uses to constrain the field is set to No Insert equals True, otherwise, the EAI Siebel Adapter will try to insert a record. Also note that in bookshelf there is a typo in that there should be spaces between the words of the property name.

FieldDependency

It is easy in the GUI to determine the order of the fields being picked, either by training or by sequencing the fields in a particular way during applet design. This may help set the fields that will be used to constrain the value of another field, frequently in a hierarchical picklist. In EAI, we achieve this result through this user property. It can be used multiple times with a sequence number, just like other BC and applet user properties. The value is an integration component field name. Siebel claims that pickmapped constraints are automatically taken into account, and that may typically be the case, but I have seen times when it does not work, so this is a good fall back.

ADM - List Of Values

There are plenty of posts on support web discussing the issues with migrating LOVs, but for my own sanity, I thought I would summarize all of the relevant issues in one place.

First, we need to address the defects. These are documented on support in Document 731411.1 but I will summarize here:
(1) Go to BC 'List Of Values Child (UDA)'
(2) Add a new field 'Parent Type' based on Join 'Parent LOV' and Column 'TYPE' with Text Length '30'.
(3) Expand the pickmap for the 'Parent' field. Replace pickmap field 'Type' with 'Parent Type' and uncheck Constrain flg.
(4) Go to the integration object 'UDA List Of Values'
(5) Find the Integration component 'List Of Values Child (UDA)'
(6) Add a new field to the integration component with Name = 'Parent Type'. Data Type = 'DTYPE_TEXT', Length = '30', Type = 'Data', External Name = 'Parent Type', External Data Type = 'DTYPE_TEXT', External Length = '30', External Sequence = '38', XML Tag = 'ParentType'
(8) Compile changes.
The SR then goes into some more detail on why, after all that, it still does not quite work. To understand, we need to see that the LOV ADM Integration Object is hierarchical in one dimension. That is, there is the LOV_TYPE record and then there are the value records. But LOVs are frequently hierarchical in two dimensions, by virtue of the Parent value. What I mean is that a given LOV value record will always have one 'parent' record, its type or technical parent, and may have a second parent record, its functional parent, if you will.

ADM loads the first, technical parent in the standard way, through the relationships of the Integration Object. To load the functional parent though, ADM must run in two passes, the first to create all the parent and child records, and the second to relate them. This is necessary because we cannot guarantee the sequence with which LOV value records will be placed in the extract file. If these value records do not exist in the target already, and the parent is alphabetically (or however else we chose to sort the records) after the child, then ADM would error if it did not take this approach. The way ADM takes two passes is by virtue of the ADM Data Type explorer. You will notice that the explorer does not actually specify the foreign key fields of an object to link them to each other. Its only purpose is to run ADM in multiple passes. But the twist is that ADM will actually process dependent data types set up in the explorer in reverse order, importing the children before the parent. I personally find this confusing from a terminology perspective. Perhaps a better way of naming these Data Types is to use 'LOV-2ndPass' instead of 'LOV-HierParent' and 'LOV-1stPass' instead of 'LOV-HierChild'. This way, when we set up the search specifications for an ADM Export session, it is clear what we are trying to do.

OK, one more wrinkle to throw into the mix (just when you thought it was all making sense): there is actually a third parent relationship involved, namely the records that populate the S_LOV_REL table. I will be honest; I do not use the LOV explorer view that often and I don't really know what the point of this table is. In theory it can make LOVs M:M, but I just don't think this is practical. Nevertheless, there are some vanilla uses of LOVs where these records are in fact used that way. The one that comes to mind is in payments, where the PAYMENT_TYPE_CODE values are children of the PAYMENT_METHOD_CODE values and S_LOV_REL records are created to store the relationships. The same issue applies when migrating these relationships: the related value must exist before the relationship can be built.

One final note. I think the convention of never deleting LOVs is well intended, but it is more likely to cause confusion than to solve anything. Here is why: users can and will simply change the Name/Value of a value record to something else, in which case any sense of history is lost anyway. There are no foreign key relationships to LOVs, so business data using these values is unaffected regardless. But others may disagree, so this step is completely optional. I remove the no-delete properties from the 'List Of Values Child (UDA)' BC and Integration Component. (I also allow deletes from the GUI, but that is a separate issue.) So my migration methodology is to synchronize values between environments for an initial release. You would take a different approach on a point release, where values are likely to have been added directly to production and therefore may not exist in your DEV and TEST environments.

Anyway, what are we trying to do? Quite simply, we are trying to create all the value records in pass 1, then relate them to each other in pass 2. I have already discussed how to group LOVs together for a release. This is where I diverge from Siebel's example, because I am trying to think of real-life scenarios where I am deploying releases, not just one LOV_TYPE. When creating the ADM Project/Session, here are the session items I use:


Data Type   | Child Delete | Deployment Filter
LOV-2ndPass | Y            | [Release] = '1.1'
LOV-1stPass | N            | [Release] = '1.1' AND [List Of Values Relationship.Name] IS NULL

What this means is that the first pass includes all LOV_TYPE records that have been marked for this release and all LOV value records related to them. The second part of the expression basically ensures that no relationship records are included in the first pass. When ADM attempts to set the parent value on a child, it may not be able to find it, so it will log a warning and move on. In the second pass, ADM loads all the relationship records and sets the parent values that it missed on the first pass. I have also set Child Delete to true on the second pass so that this job effectively synchronizes the value records for the type records marked.

BI - Upload Limitation

I have recently been designated the BI technical resource on my project, so I am looking at the BI capabilities (on 7.8) for the first time. Despite a fairly complicated and mistake-laden patch upgrade, which I do not even want to get into, it is a pretty powerful tool, much better architected than Actuate. That said, there are some pretty glaring limitations in how it is administered that require so little effort to fix, I decided to just go ahead and fix them.

My main beef is that the architecture requires your BI report developer to have access to both the BI file system and the Siebel Server file system. I suppose you could set this up in a way that minimizes the security risk, but it just seems unnecessary. Essentially, to upload a new BI report template, the developer creates a record in the BI Report Template administration view, attaches the two template files (an RTF and an XLF), and clicks the Upload button. So far, so good. The problem is that these template files must also exist in a specific place in the Siebel Server file system to generate a report, and the code behind that button does not take the extra step of copying the files to where they need to go. Also, there is an existing product defect where modifications to an existing report record require the developer to go into the BI file system and delete the entire directory containing that report template. So that is where I step in.

First I added two new System Parameters indicating the locations of the BI and Siebel home directories. There is a way to grab environment variables through script, but I did not feel like investigating it, so let's call that phase II (a rough sketch follows the table below). For example, here are my two:


Name                | Value
BIHomeDirectory     | E:\OraHome
SiebelHomeDirectory | E:\sea78\siebsrvr
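
As a possible phase II, the same values could be derived from environment variables instead of administered parameters. The snippet below is only a sketch of that idea; the SIEBEL_HOME and XMLP_HOME variable names are assumptions that would have to match whatever is actually set on your servers, and Frame.GetSysPref is the framework call used elsewhere in this post.

// Hypothetical phase II sketch: derive the home directories from environment variables,
// falling back to the administered parameters when the variables are not set.
// "SIEBEL_HOME" and "XMLP_HOME" are assumed names; adjust them to your environment.
var sSiebelHome = Clib.getenv("SIEBEL_HOME");
var sBIHome = Clib.getenv("XMLP_HOME");

if (sSiebelHome == null || sSiebelHome == "")
    sSiebelHome = Frame.GetSysPref("SiebelHomeDirectory");
if (sBIHome == null || sBIHome == "")
    sBIHome = Frame.GetSysPref("BIHomeDirectory");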


Then we need to trap the call that uploads the template files. This call is performed from 'Report Template BC' by the 'Upload' method. We always need to delete the directory before this upload is called, and we also want to delete the existing template file from the Siebel Server file system. Here is a script to place in the PreInvokeMethod event to accomplish that (there are also some references to the Log and Frame objects):

switch (MethodName) {
  case "Upload":
    try {
      Log.StartStack("Business Component", this.Name()
        + ".PreInvoke", MethodName, 1);
      // Commit the record before touching the file system
      this.WriteRecord();
      var sReturn, sCommand;
      var sSiebel = Frame.GetSysPref("SiebelHomeDirectory")
        + "\\XMLP\\TEMPLATES";
      var sPath = Frame.GetSysPref("BIHomeDirectory");
      var sFile = this.GetFieldValue("ReportTmplFileName")
        + "." + this.GetFieldValue("ReportTmplFileExt");

      sPath = sPath
        + "\\XMLP\\XMLP\\Reports\\SiebelCRMReports\\"
        + this.GetFieldValue("Report Name");
      Log.stepVars("BI Report Path", sPath, 3);

      // Delete the existing report directory from the BI file system
      sCommand = 'rmdir "' + sPath + '" /S /Q';
      sReturn = Clib.system(sCommand);
      Log.stepVars("Command", sCommand, "Success?", sReturn, 3);

      // Delete the existing template file from the Siebel Server file system
      sCommand = 'del "' + sSiebel + '\\' + sFile + '"';
      sReturn = Clib.system(sCommand);
      Log.stepVars("Command", sCommand, "Success?", sReturn, 3);
    } catch(e) {
      Log.RaiseError(e);
    } finally {
      Log.Unstack("", 1);
    }
    break;
}
return (ContinueOperation);
OK. That addresses the product defect for updates. The second part is to copy the template files to the Siebel Server file system once they are uploaded. The following script can be added to the InvokeMethod event:

switch (MethodName) {
  case "Upload":
    try {
      Log.StartStack("Business Component", this.Name()
        + ".Invoke", MethodName, 1);
      var sReturn, sCommand;

      var sSiebel = Frame.GetSysPref("SiebelHomeDirectory")
        + "\\XMLP\\TEMPLATES";
      var sPath = Frame.GetSysPref("BIHomeDirectory");
      var sFile = this.GetFieldValue("ReportTmplFileName")
        + "." + this.GetFieldValue("ReportTmplFileExt");

      sPath = sPath
        + "\\XMLP\\XMLP\\Reports\\SiebelCRMReports\\"
        + this.GetFieldValue("Report Name");
      Log.stepVars("Source Path", sPath, "Target Path",
        sSiebel, "File to copy", sFile, 3);

      // Copy the uploaded template from the BI repository to the Siebel Server templates directory
      sCommand = 'copy "' + sPath + '\\' + sFile + '" "' + sSiebel
        + '\\' + sFile + '"';
      sReturn = Clib.system(sCommand);
      Log.stepVars("Command", sCommand, "Success?", sReturn, 3);
    } catch(e) {
      Log.RaiseError(e);
    } finally {
      Log.Unstack("", 1);
    }
    break;
}

And there you go.

Building a BI Developer's SuperView

Another limitation I find irritating when it comes to building BI templates is how basic the sample file generator is. My main beef is that it just takes the first 10 records in the Integration Object and spits them out. If you have a complicated IO with child ICs, it is possible, and even likely, that those first ten records do not have the child detail records you need to test your report output. There are some ways around this, like hard coding a search spec on the BC against a thick-client partial compile to generate a file with the data you want, but that seems so inelegant. My other gripe with this feature is that the report developer once again needs either a Siebel thick client or access to the Siebel Server file system to actually get at the XML file produced. It seems like the whole point of all the BI Administration views is to avoid having to go to the file system. What to do...

Caveat emptor: the configuration steps below are meant to give you the idea. I am posting this after the fact to highlight what I recall as the important pieces, so not every step is included. You will need to create the new custom table CX_TMPL (or use another), create all the links, applets, and view objects, make the BO/Screen changes, and deploy them.

First I build a view with the same IO BC based applet as the vanilla view on top, plus child applets for both attachments and a new object which is essentially a stored search spec. First, the attachment BC: this is a new BC which you can copy from an existing attachment BC, changing the names as you go. Here is mine, called 'Sample IO Attachment', based on S_FILE_ATT. Use the field name prefix 'Sample' instead of whichever prefix is used on the BC you are copying (be sure to set the user property DefaultPrefix to 'Sample' too):
Name       | Join | Column       | Force Active | Predefault Value                             | Text Length | Type
IO Id      |      | PAR_ROW_ID   | Y            | Parent: 'Repository Integration Object.Id'   | 15          | DTYPE_ID
Parent Key |      | X_PARENT_KEY |              | Parent: 'Repository Integration Object.Name' | 100         | DTYPE_TEXT


The Search Spec applet is based on a custom BC, 'Report IO Sample File Template', based on the new table, CX_TMPL (I use this table for other things too so I type spec each record):
Name                 | Join      | Column     | Force Active | Predefault Value                             | Text Length | Type
Name                 |           | NAME       |              | Field: "Id"                                  | 100         | DTYPE_TEXT
Parent Id            | S_INT_OBJ | ROW_ID     | Y            | Parent: "Repository Integration Object.Id"   | 15          | DTYPE_ID
Parent Name          |           | PARENT_FLD |              | Parent: "Repository Integration Object.Name" | 50          | DTYPE_TEXT
Search Specification |           | CONSTRAINT | Y            |                                              | 250         | DTYPE_TEXT
Type                 |           | TYPE       |              | SAMPLE_IO_CONSTRAINT                         | 30          | DTYPE_TEXT
Number of Records    |           | LN_NUM     |              |                                              | 10          | DTYPE_INTEGER


The join to S_INT_OBJ uses the join specification Parent Name = NAME. Using Name instead of Id allows the search specs to survive repository moves.

You will also need the following Named Method User Property:

"GenerateConstrainedData", "INVOKESVC", "Report IO Sample File Template", "Workflow Process Manager", "RunProcess", "'ProcessName'", "'Export Sample IO To File'", "SearchConstraint", "[Search Specification]", "'IOName'", "[Parent Name]", "Path", "'..\XMLP\Data'", "Object Id", "[Parent Id]", "PageSize", "[Number of Records]"

This user property is what activates the button you will need to place on the applet built over this BC. On that applet (based on the class CSSSWEFrameListBase), add a button that invokes the method 'GenerateConstrainedData'. No additional script should be needed there.

Create a Service Flow Workflow Process called 'Export Sample IO To File'


Here are the Process Properties:

Name             | In/Out | Data Type
FileName         | In     | String
IOName           | In     | String
PageSize         | In     | String
Path             | In     | String
SearchConstraint | In     | String
SiebelMessage    | None   | Hierarchy
ViewMode         | In     | String


The first 'Echo' step is a Business Service based on Workflow Utilities, Echo method. This step sets up all the variables used later in the process. Here are the arguments:

I/O    | Argument         | Type             | Value/Property Name
Input  | IOName           | Process Property | IOName
Input  | PageSize         | Process Property | PageSize
Input  | Path             | Process Property | Path
Input  | SearchConstraint | Process Property | SearchConstraint
Input  | ViewMode         | Process Property | ViewMode
Output | FileName         | Expression       | IIF([&FileName] is not null, [&FileName], [&IOName])


The next 'Export IO' step is a Business Service based on EAI Siebel Adapter, QueryPage method. This step queries the integration object. Here are the arguments:

I/O    | Argument            | Type             | Value/Property Name
Input  | OutputIntObjectName | Process Property | IOName
Input  | PageSize            | Process Property | PageSize
Input  | SearchSpec          | Process Property | SearchConstraint
Input  | ViewMode            | Process Property | ViewMode
Output | SiebelMessage       | Output Argument  | SiebelMessage


The next 'Write to File' step is the Business Service, EAI XML Write to File, WriteEAIMsg method. This step writes the property set out as an XML document to the file system. Here are the arguments:

I/O   | Argument      | Type             | Value/Property Name
Input | FileName      | Expression       | [&Path]+"\"+[&Process Instance Id]+"_"+[&FileName]+".xml"
Input | SiebelMessage | Process Property | SiebelMessage


The final 'Attach' step is another Business Service, this one custom. The basic logic here is to add an Attachment to the file system which is first described in Oracle document 477534.1 (I have made some improvements which I will perhaps discuss another day). Here are the arguments:

I/O   | Argument                 | Type             | Value/Property Name
Input | AttBusinessComponent     | Literal          | Sample IO Attachment
Input | AttachmentFieldName      | Literal          | SampleFileName
Input | BusinessObject           | Literal          | Repository Integration Object
Input | File                     | Expression       | [&Path]+"\"+[&Process Instance Id]+"_"+[&FileName]+".xml"
Input | ObjectId                 | Process Property | Object Id
Input | PrimaryBusinessComponent | Literal          | Repository Integration Object
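
The guts of that custom service are for another day, but conceptually it positions the parent record and then uses the attachment BC's CreateFile method to pull the file into the Siebel file system. The following is only a rough sketch under those assumptions (no error handling, and the property names simply mirror the input arguments above); it is not the actual service:

// Rough sketch of the attach step (business service method body; Inputs is the input property set)
var oBO = TheApplication().GetBusObject(Inputs.GetProperty("BusinessObject"));
var oParentBC = oBO.GetBusComp(Inputs.GetProperty("PrimaryBusinessComponent"));
var oAttBC = oBO.GetBusComp(Inputs.GetProperty("AttBusinessComponent"));

// Locate the parent record (the Integration Object row) by the Object Id passed in
oParentBC.SetViewMode(AllView);
oParentBC.ActivateField("Id");
oParentBC.SetSearchSpec("Id", Inputs.GetProperty("ObjectId"));
oParentBC.ExecuteQuery(ForwardOnly);

if (oParentBC.FirstRecord()) {
    // Create the attachment child and let CreateFile copy the generated XML into the file system
    oAttBC.NewRecord(NewAfter);
    oAttBC.InvokeMethod("CreateFile", Inputs.GetProperty("File"),
        Inputs.GetProperty("AttachmentFieldName"), "N");
    oAttBC.WriteRecord();
}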

Tools Bleg

Don't get me wrong. I love Siebel Tools. Compared to other enterprise systems where development for the most part involves modifying script, Siebel has a very elegant development platform. OK, all that being said, after developing in Tools for over eleven years (odd writing that), there are some things I would love to do better to make my development experience more efficient. So to that end I thought I would put some thoughts out into the cloud to see if anyone has thought of a workaround for any of these items:
  • Column Preferences. Is it just me, or does the Tools client not save preferences the way the Siebel client does? Rearranging columns usually works, but changing widths does not seem to stick.
  • PDQs. The idea of Bookmarks is nice, but I hate the fact that drilling down or using them loses the context of my explorer pane when I go back. PDQs on every object, like within the Siebel client (and the ability to set default PDQs for each view), would do wonders.
  • Drilldowns. Speaking of drilldowns, is it really necessary for drilling down to collapse the rest of my explorer pane, hence refreshing all the queries on other objects?
  • Expose Tab Order on Applets. I am tempted to try this one out myself one day because it seems doable. Who knows.
  • Applet Wizard. Not for creating a new one; that is fine. But to synchronize with a BC down the road when I want to add a new field: a wizard would be a much easier way to do that than adding a control or list column and then adding it to the web template.
  • Allow sync of metadata needed by Tools without a Remote sync. This might be a bit more out there, but I find it annoying that Users (Help > About Record) and LOVs cannot be synced with a 'Get'. I know you can get them with a remote sync, but more and more clients do not use Remote, or use it so infrequently that it is not emphasized, and it is a pain to keep my remote client in sync with the server in a development environment anyway. This might sound minor, but like I said, it annoys me.
I have mainly limited this list to just applying functionality that already exists in the Siebel Client or to exposing data which I am pretty sure is there to be exposed. Not really trying to create a forum for adding "New" features. I may add to this list in the future, but feel free to add your own wishes/solutions in comments.

eScript Framework on 8.1

Converting the eScript framework to 8.1 proved a bit troublesome for me, as the Siebel strong type engine has apparently dropped support for prototyping Siebel objects such as a Business Service. This makes the implementation a bit less clean, since without being able to declare a prototype of the Log or Frame objects on application start, we are left with having every framework function hang off the Application object. That being the case, I consolidated the Frame and Log objects from the 7.8 framework into a single Utility object, since there was no longer much advantage in separating them. Instead of the elegant 7.8 calls:


Log.Stack("MyMethod",1);
Log.Step("Log the time as "+Frame.Timestamp("DateTimeMilli"),3);
Log.Vars("VariableName", varValue, 3);
Log.Unstack("",1);

we instead do this:


TheApplication().Utility.logStack("Write", this);
TheApplication().Utility.logStep("Log the time as "+
    TheApplication().Utility.Timestamp("DateTimeMilli"));
TheApplication().Utility.logVars("VariableName", varValue);
TheApplication().Utility.logUnstack("");

Oh well.  To mitigate this somewhat, I have added a number of enhancements since the initial series of posts, which I will try to discuss sometime soon.
  • Automatically tie the log level to the function doing the logging (Stack/Unstack vs. variables, for instance), so there is no need for the numeric last parameter on every logging call, though it is still supported as an override; a rough sketch follows this list
  • Added support for Unix file systems
  • Standardized the identification of logged record Ids (by passing the 'this' reference, the framework appends the row Id for methods with Write, Delete, and Invoke in the name)
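
To illustrate the first enhancement, here is a purely hypothetical sketch (not the framework's actual code): each logging function supplies its own default level, so callers only pass a level when they want to override it. The writeToLog helper is a stand-in for the real file-writing routine.

// Hypothetical illustration only: plain steps default to level 3 unless overridden
function logStep(sText, iLevel)
{
    if (iLevel == null || iLevel == "")
        iLevel = 3;
    if (gCurrentLogLvl >= iLevel)
        writeToLog(sText);   // stand-in for the framework's file writer
}
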
To implement the basic framework in 8.1, you need something like this in the Application Start event:
        this.Utility = TheApplication().GetService("ETAR Utilities");
        this.Utility.Init();

Here is the Declarations section:


var gsOutPutFileName;
var gsFileName;
var gsLogMode;
var giIndent = 2; //Indent child prop sets this many spaces to the right for each level down.
var giPSDepth = 0; // How deep in the property set tree, what level
var gaFunctionStack = new Array(); //used in debugStack function to store called functions
var giStackIndex = 0; //Where in the function stack the current function resides
var gsIndent = ''; //used in debug methods to identify stack indents
var giLogBuffer = 0;
var giLogLines = 0;
var gsLogPath = "";
var gsLogCache = "";
var gsLogSession = "";
var giErrorStack = 0;
var ge = new Object();
var gStack = new Object();
var gCurrentLogLvl;

The Utilities business service is a cached service in Tools. Its Init function looks like this:


giErrorStack = 0;
ExtendObjects();
gsLogMode = GetSysPref("Framework Log Mode");
gsLogMode = (gsLogMode == "" ? "FILE" : gsLogMode.toUpperCase());
gsLogSession = TimeStamp("DateTimeMilli");

if (TheApplication().GetProfileAttr("ETAR User Log Level") != "")
    gCurrentLogLvl = TheApplication().GetProfileAttr("ETAR User Log Level");
else gCurrentLogLvl = GetSysPref("CurrentLogLevel");
giLogBuffer = GetSysPref("Log Buffer");
gsLogPath = GetSysPref("Framework Log Path");
var os;
try {
  os = Clib.getenv("OS");
} catch(e) {
  os = "UNIX Based";
}
try {
  gsFileName = "Trace-"+TheApplication().LoginName()+"-"+gsLogSession+".txt";
  //A Windows OS indicates a thick client. Assume the path is the dynamically
  //determined Siebel_Home\Log directory, or ..\log
  if (os.substring(0, 7) == "Windows") {
//  gsLogPath = gsLogPath.replace(/\\$/, "");   //Remove trailing backslash if used
//  gsLogPath = gsLogPath.replace(/\//, "\\");  //switch invalid OS directory separators
    gsLogPath = "..\\Log\\";
    gsOutPutFileName = gsLogPath+gsFileName;
  } else {
    gsLogPath = gsLogPath.replace(/\/$/, "");   //Remove trailing slash if used
    gsLogPath = gsLogPath.replace(/\\/g, "/");  //switch invalid OS directory separators
    gsLogPath = gsLogPath+"/";
    gsOutPutFileName = gsLogPath+gsFileName;
  }
} catch(e) {
  gsLogPath = "";
  gsOutPutFileName = gsFileName;
}
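
One note on the buffer-related globals (giLogBuffer, giLogLines, gsLogCache): they exist so the framework can batch writes to the trace file rather than opening it on every line. The actual write routine is not shown in this post; purely as an illustration, a buffered write could look something like this:

// Hypothetical sketch only: cache log lines and flush once giLogBuffer lines have accumulated
function writeToLog(sLine)
{
    gsLogCache += sLine + "\n";
    giLogLines++;
    if (giLogBuffer == 0 || giLogLines >= giLogBuffer) {
        var fLog = Clib.fopen(gsOutPutFileName, "a");  // append to this session's trace file
        if (fLog != null) {
            Clib.fputs(gsLogCache, fLog);
            Clib.fclose(fLog);
        }
        gsLogCache = "";
        giLogLines = 0;
    }
}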

Performance Tuning Methodology

I recently had an opportunity to do a bit of performance tuning on a newly deployed production application and thought I would share a methodology for tackling some of the low-hanging fruit, sort of the 80/20 rule of Siebel performance tuning. My experience is that with Siebel 7.8 and higher, on Oracle 10 and higher, most performance issues are Siebel configuration issues. Of those, most fall into one of two categories:
  • Missing Indexes
  • Sort Specs
When customizing Siebel, you will frequently create new relationships between logical objects via a new foreign key. There should always be a corresponding index on that foreign key on the M side of the 1:M or M:M linked table. Typically it is just a single Id column, but if for some reason there are multiple columns (perhaps a join spec and a join constraint), make sure all of the columns from the child table are part of the index.

Be aware that all the perfectly planned indexes in the world will frequently be ignored if there is a sort spec on a query. The sort essentially takes precedence, and any index that optimizes the sort will usually be used to the exclusion of other indexes that might optimize what the user is actually doing on that view. I frequently see performance problems on visibility views (All/All Across) without any query refinement at all. When this occurs, it is usually because of the All Mode Sort user property settings. If you are seeing performance problems on an All view, try changing that property on the BC to fix the issue (see the example below).
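
For reference, All Mode Sort is a standard BC user property, so the adjustment is just a Tools change along these lines (a value of FALSE generally suppresses the sort in the All visibility modes; TRUE and Normal are the other documented values, so check Bookshelf for their exact behavior on your version before choosing):

Object:              Business Component (the BC behind the slow All/All Across view)
User Property Name:  All Mode Sort
User Property Value: FALSE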

Here is a general methodology for identifying and fixing performance issues.

  • For the OM component having the issue, change the Server Configuration event logging level to 4 for these events:
    • SQL Parse and Execute
    • Object Manager SQL Log
  • Execute the operation that performs the slow query
  • Open the corresponding OM log and find the SQL statement representing the slow query
  • Finding the statement can be done in a couple of ways, but I use the following:
    • Query for this string in the log '***** SQL Statement Execute Time'
    • Look at the seconds for this line and the subsequent '***** SQL Statement Initial Fetch Time' to see a large value
  • Copy the preceding SQL statement into a SQL editor such as Toad or Benthic (or whatever you fancy), swapping out the bind variables
  • Run an explain plan on the statement
    • Look for a line that says Table Access Full.  If you see such a line, look at the table being accessed this way, then look back at the where clause to see how the SQL statement joins to that table.  Then check in Tools whether there is an index on that table covering the columns used in that join.
    • If indexes are not an issue, but there is an Order By in the SQL statement, try commenting out the Order By and rerunning the explain plan to see how it changes.  If the explain plan changes significantly (the cost goes down), then confirm that you really need the sort spec in the particular scenario you are in.
This is really just meant to be a way to find low-hanging-fruit performance issues.  It is important to configure with performance in mind (especially when using script or workflow algorithms).  Other sources of performance bottlenecks include (but are not limited to):
  • Synchronous interface round trips
  • File System operations
  • Network bandwidth  (especially if using a VPN)
  • Memory or CPU bound servers
