Server Components – Generated Code and MetaData (Table) Driven

<RANT>

While looking through the server-side representation of data objects for a middleware server, I noticed something that really set me off…
When my simple components (CRUD + custom-operations) were deployed to the middleware server, the marshaling code was all generated Java code.

What struck me was the presence of a TON of repetitive generated code – and all I could think was: why didn’t they use a table?

What I was thinking: for each data value-object that needs to be marshaled to/from the server, couldn’t it be represented as attributes in a table (one entry per parameter) that a marshaling engine iterates? The engine would evaluate each parameter’s type and size, do the appropriate marshaling, then move on to the next parameter. (A rough sketch follows the advantages list below.)
The advantages of a marshaling table:

  1. One place to focus any maintenance – the engine.
    Fix a bug in the engine and all legacy components will benefit from the fix.
  2. One place to focus any performance optimizations – the engine.
    Performance fix here will benefit all components.
  3. Compact representation – the only thing that grows with the number/complexity of the components is the table itself.
  4. Both future-proofing and easier support for legacy components.
    With a version tag in the table in the component repository, a newer version of the engine with new features will have a chance to ‘drop back’ when encountering an old component.
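
To make this concrete, here is a minimal sketch of the kind of table and engine I have in mind. Everything in it – the descriptor class, the single-character type codes, the engine itself – is invented for illustration, not the middleware’s actual code:

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.util.Map;

    // One table row per value-object parameter (hypothetical descriptor).
    final class ParamDesc {
        final String name; // parameter name in the value object
        final char   type; // made-up type codes: 'I' = int32, 'D' = double, 'S' = string
        final int    size; // declared maximum for variable-length types

        ParamDesc(String name, char type, int size) {
            this.name = name;
            this.type = type;
            this.size = size;
        }
    }

    // The single marshaling engine: iterates the table instead of relying
    // on per-component generated code.
    final class MarshalEngine {
        void marshal(ParamDesc[] table, Map<String, Object> values, DataOutputStream out)
                throws IOException {
            for (ParamDesc p : table) {
                Object v = values.get(p.name);
                switch (p.type) {
                    case 'I': out.writeInt((Integer) v);   break;
                    case 'D': out.writeDouble((Double) v); break;
                    case 'S': out.writeUTF((String) v);    break; // size check omitted
                    default:  throw new IOException("unknown type code: " + p.type);
                }
            }
        }
    }

A new component then costs only a new table, e.g.:

    ParamDesc[] customerTable = {
        new ParamDesc("id",   'I', 4),
        new ParamDesc("name", 'S', 64),
    };

Bug fixes and performance work land in the engine once, and a version tag per table is where the ‘drop back’ behavior in point 4 would hook in.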

Anyway – that design team evidently had an architect or implementer who liked code generators.
This guy, on the other hand, likes being table-driven as much as possible.

BTW – the CORBA and COM support within the PowerBuilder runtime is all table driven with a single marshaling engine (which I wrote AGES ago).

</RANT>

API Frameworks vs. Code Generators

I have recently used two of the ‘modern’ approaches to application development:
a) writing my code against an API framework
b) specifying my entities and some behavior, then pressing a button on a code generator

For a non-trivial application, I have to say that I prefer having a robust API framework coupled with various tool helpers over the code generators.

My two recent use cases were both complex development environments (IDEs).
One was written using the Microsoft Visual Studio Extensibility (VSX) universe. This is a rich and barely documented API into the framework that is used to create Visual Studio itself. It has a steep learning curve, but we (the developers of our tool) ended up learning what we needed. We were sometimes frustrated when an API call would disappear into Microsoft proprietary code, but generally it behaved “as we expected”.

The other product is also a complex programming environment, but based on Eclipse. The original team chose to use the “Eclipse Modeling Framework” (EMF) with its code generators. The development workflow was to define our persistent objects and their relationships in an Eclipse tool. We then had the EMF generate the class definitions, factory objects, and relationships as Java.
It’s interesting that, similar to the Visual Studio internals, the open-source Eclipse also has poor documentation. However, while the Visual Studio newsgroups were pretty helpful, the Eclipse (and Java) newsgroups are often condescending and parochial.

The EMF (and its graphical sibling ‘GMF’, which we also used) got an initial designer up and running lickety-split.  It does have a lot of benefits for simple and demonstration projects.  However… we were tasked with adding lots of detailed functionality, behaviors, and even product branding.
This is when the fun started…

With a code-generator approach, there is a METRIC TON of generated code, including a lot of duplicated and almost-duplicated code.  That’s OK for toys, but when we needed to start adding behavior that cannot be expressed in the Eclipse EMF designer, we ended up adding a lot more duplicated code ourselves (like a cross-cutting aspect – well – not fun…).  This was compounded by the imperfect RE-generation process.

By re-generation, consider the use case where you change the basic EMF model – usually adding classes and properties – and then trigger the code regeneration process.
I have to give the Eclipse guys credit: they do a pretty good job of keeping our other changes to the Java files in place.  However, the oodles of files in the code-generation process mean we need to both check the generated code in many places and copy/paste our customizations into all the new files.

As an example, consider adding a user-visible type (like a new control for the tool palette – it’s really a type with manifestations like tool-palette properties, an instantiated type’s property sheet, etc.).

In the API framework approach, it is a manual process: we create (derive) the classes for the new type, then add the class specifics to a table.  The framework reads the table, and the new type (or control) appears on the tool palette and is hooked into the system.  You then customize (the real objective of adding the type).
Writing down the procedure, the files, and the expectations is pretty straightforward, and the other developers had no problem doing this.
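
Roughly, the registration pattern looks like this (class and table names are invented for illustration; the real framework table carries more columns):

    import java.util.ArrayList;
    import java.util.List;

    // A derived control class - the customization work happens in here.
    class SliderControl { /* type-specific behavior */ }

    // Hypothetical table row describing one palette type.
    final class PaletteEntry {
        final String displayName;
        final String iconPath;
        final Class<?> implClass; // the derived class, instantiated by the framework

        PaletteEntry(String displayName, String iconPath, Class<?> implClass) {
            this.displayName = displayName;
            this.iconPath = iconPath;
            this.implClass = implClass;
        }
    }

    // The framework iterates this table at startup and hooks each entry
    // into the tool palette - adding a type means adding one row.
    final class PaletteRegistry {
        static final List<PaletteEntry> TABLE = new ArrayList<PaletteEntry>();
        static {
            TABLE.add(new PaletteEntry("Slider", "icons/slider.png", SliderControl.class));
        }
    }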

In the Eclipse EMF code-generation approach, we edit the model in the Eclipse tool (which is saved to an XML file).  In the modeling tool we define the new type (control), its properties, and its relationships to the other types – IOW everything you would expect in a modeling tool – and all is good here; this is what I want.
The difficulty comes in the Java code re-generation phase.  The changes from the EMF model both create new files (like for the new types) and merge various other changes into various other files.
The generation-and-merge process is pleasing in that it works as well as it does, but there are still quite a few files that need to be verified/changed/examined.  This particularly tweaked me because a new version of Eclipse has the generator using a slightly different placement of braces and indenting.  Hence running a directory-wide code compare is basically useless (even a flexible tool like ‘Beyond Compare’ flags a change when next-line braces become the same-line flavor…), to say nothing of the generator now using an underscore a little more often than before…

In short, when creating a non-trivial application, I would rather take a steeper learning curve than deal with ongoing pain.

Remote Debug the Server Side ResultSetFilters and ResultCheckers

Somebody on the internal newsgroup asked how to use the Eclipse remote debug capability to debug custom ResultSetFilters and ResultCheckers – ones that run on the SUP server itself.

Somebody internally beat me to the punch, but it is described in:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.sup.doc-SUP-1.0.1/projects/sup/tooling/en/source/t_mbo_writing_result_set_filter.html

Like I said, this is basically standard Eclipse remote debugging, after the Unwired Server has been restarted in JPDA (Java Platform Debugger Architecture) mode.  You can set breakpoints and party in the code (using the brain-damaged Java debugger, but better than System.out.println()…).

(Update 19-July-2012)
On the internal newsgroup, one of the server people posted this helping bit:

You should be able to debug your filter running within Workspace by adding the same debugging JVM flags (all one line):

-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005

to UnwiredWorkspace.bat.

I believe you might need to use a different instance of Eclipse to run the debugger. I’ve seen some pretty strange behavior using the Eclipse debugger to attach to itself.

SUP – ‘Result Checker’ and ‘Result Set Filter’ Customizations

There are two server-side customizations you can put into your Mobile Business Objects (MBOs).  You write these customizations in Java and deploy them with the MBO onto the server.
The checking & filtering are totally transparent to the consumer of the MBO.

Result Checker

The ‘result checker’ allows custom error checking for the MBO operations to occur on the server.  Because these are written in Java, they have a rich set of capabilities.  The main aspect to remember (and the one that led to a customer asking the question) is that these result checkers execute on the server, yet provide status information for the application running on the end-user’s device.
The online doc is (here).
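
The real interface and registration steps are in the linked doc; purely to show the flavor, a checker conceptually looks something like this (the interface and names are invented here, not the actual SUP API):

    import java.util.Map;

    // Invented stand-in for the SUP result-checker contract.
    interface ResultChecker {
        // Runs on the server after the back-end operation; translates the raw
        // result into status information for the device application.
        void check(Map<String, Object> operationResult) throws OperationException;
    }

    class OperationException extends Exception {
        final int code;
        OperationException(int code, String message) {
            super(message);
            this.code = code;
        }
    }

    // Example: treat a BAPI return entry of TYPE 'E' as an operation error.
    class BapiErrorChecker implements ResultChecker {
        public void check(Map<String, Object> operationResult) throws OperationException {
            if ("E".equals(operationResult.get("TYPE"))) {
                throw new OperationException(500, (String) operationResult.get("MESSAGE"));
            }
        }
    }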

Result Set Filter

Similar to the result checker, the ‘result set filter’ allows data to be manipulated (only during the read) before it is fed into the storage/cache for the MBO.  This allows simple things like combining the first and last names into a single “name” field, or removing rows that do not fulfill some other requirement that is not in the BAPI definition.
Another nice feature of the result set filter is that you can use multiple data sources – like a couple of SQL tables along with your SAP BAPIs – and ‘join’ them into a single MBO result set.
The online doc is (here).
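
Again, the linked doc has the real contract; purely to illustrate the idea (invented interface, standard JDBC reads), a filter doing the name-combining and row-dropping described above might look like:

    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Invented stand-in for the SUP result-set-filter contract.
    interface RowFilter {
        List<Map<String, Object>> filter(ResultSet in) throws SQLException;
    }

    // Runs server-side during the read, before rows reach the MBO cache.
    class NameCombiningFilter implements RowFilter {
        public List<Map<String, Object>> filter(ResultSet in) throws SQLException {
            List<Map<String, Object>> out = new ArrayList<Map<String, Object>>();
            while (in.next()) {
                // Drop rows the device should never see.
                if (!"ACTIVE".equals(in.getString("STATUS"))) continue;
                Map<String, Object> row = new LinkedHashMap<String, Object>();
                row.put("id", in.getInt("ID"));
                // Combine first and last names into the single "name" field.
                row.put("name", in.getString("FIRST_NAME") + " " + in.getString("LAST_NAME"));
                out.add(row);
            }
            return out;
        }
    }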

XBW Designer Feature – “Arrange All”

I just stumbled across this feature in the SUP Hybrid Web Container designer (the XBW file designer): Arrange All.  I know we get this for free from GMF/GEF and it has been there for ages, but I never noticed it.  It solves one of those items I had on my whiteboard to investigate and address (in my copious spare time…).

Basically, right-click on the design surface and choose “Arrange All” from the context menu.  There is a smooth slide as the screen nodes move into (usually) pleasing positions.

First – the original flow designer….

[Screenshot: XBW designer layout before “Arrange All”]

And – after the “Arrange All” action.

[Screenshot: XBW designer layout after “Arrange All”]