Linksys WRT1200AC as an Access Point

Because of a nearby lightning strike we had to replace a lot of hardware on the home network (in spite of battery backups and surge protectors and lightning bleeders). This included our old reliable WiFi access point.
I had already been thinking of replacing the access point, wanting 5GHz coverage and the option of blowing in OpenWRT down the road (I have an ancient Linksys WRT54G that I did exactly that with; I use it for occasional bridging needs). Anyway, after the unscheduled $150 purchase, I'm adding the "Linksys WRT1200AC Dual-Band WiFi Router" to the home network. I selected this device because it was in stock at Best Buy, has no internal fan, and I'd seen good reviews for it.
And – the case styling – has a retro look like the old WRT54G which I just love.
And – the antennas are removable so I can use my 2-foot long antennas for whole house coverage. I considered the WRT1900, but decided not to spend the extra money.
As you'd expect, our network is non-trivial, and I wanted just a WiFi access point with a static IP – nothing else.

But it took me many tries to disable the built-in DHCP and NAT functions. There was even a setup screen with a radio button for NAT – but it could not be turned off. That radio button was laughing at me: just the one button, with no way to deselect it.
The DHCP screen was similar but worse – a checkbox to enable/disable the DHCP server. Except that disabling DHCP basically bricked the device, and I had to go back to a factory reset.

I was stymied – how to disable the fancy router/NAT functionality and get a dumb but fast switch and access-point.

Then, on one of the screens, I found the "Connection Type" drop-down. It was set to "Automatic Configuration – DHCP". I tried "Static IP" a few times with no luck, and then I tried "Bridge Mode" (which is somehow different from "Wireless Bridge Mode"). To me, "Bridge Mode" is the same as "Wireless Bridge Mode" – which, in my book, means connecting two wireless access points as a bridge between two sub-nets. The net effect here, however, was that the "router" turned into an access point. In "Bridge Mode" the DHCP and NAT options magically disappeared, and the screens slimmed down to what an access point should have.

In other words – “Bridge Mode” disables the fancy features and gives you a fast and reliable WiFi Access Point with some added ports on the back if you need them.  I am now quite happy with my “Linksys WRT1200AC”.

Server Components – Generated Code and MetaData (Table) Driven

<RANT>

While looking through the server-side representation of data objects for a middleware server, I noticed something that really set me off…
When my simple components (CRUD + custom-operations) were deployed to the middleware server, the marshaling code was all generated Java code.

What struck me was the presence of a TON of generated repetitive code – and all I could think was – why didn’t they use a table??

What I was thinking: for each data value object that needs to be marshaled to/from the server, can't it be represented as attributes in a table (one entry per parameter) that a marshaling engine then iterates? The engine would evaluate each parameter's type and size, do the appropriate marshaling, then move on to the next parameter.
The advantage of a marshaling table:

  1. One place to focus any maintenance – the engine.
    Fix a bug in the engine and all legacy components will benefit from the fix.
  2. One place to focus any performance optimizations – the engine.
    Performance fix here will benefit all components.
  3. Compact representation – the only thing that grows with the number/complexity of the components is the table itself.
  4. Both future-proofing and easier support for legacy components.
    With a version tag in the table in the component repository, a newer version of the engine with new features will have a chance to ‘drop back’ when encountering an old component.
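To make the idea concrete, here is a minimal sketch of such a table-driven engine. All names (`ParamDesc`, `MarshalEngine`, the "Customer" table) are hypothetical, the wire format is just `DataOutputStream` for illustration, and a real engine would handle many more types plus the version tag mentioned above:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Map;

// One table entry per parameter of a value object (hypothetical descriptor).
record ParamDesc(String name, Class<?> type) {}

public class MarshalEngine {
    // The engine: walk the table, marshal each parameter in order.
    public static byte[] marshal(Map<String, Object> values, ParamDesc[] table) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buf)) {
            for (ParamDesc p : table) {
                Object v = values.get(p.name());
                if (p.type() == Integer.class)      out.writeInt((Integer) v);
                else if (p.type() == String.class)  out.writeUTF((String) v);
                else if (p.type() == Double.class)  out.writeDouble((Double) v);
                else throw new IOException("unsupported type: " + p.type());
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) {
        // The "table" for a hypothetical Customer value object.
        ParamDesc[] customerTable = {
            new ParamDesc("id", Integer.class),
            new ParamDesc("name", String.class),
        };
        byte[] wire = marshal(Map.of("id", 42, "name", "Acme"), customerTable);
        System.out.println(wire.length); // 4-byte int + 2-byte length prefix + "Acme" = 10
    }
}
```

Adding a new value object means adding a new table – no new marshaling code – which is exactly where points 1 through 4 above come from.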

Anyway – that design team had an architect or implementer who liked code generators.
Me, I like being table driven as much as possible.

BTW – the CORBA and COM support within the PowerBuilder runtime is all table driven with a single marshaling engine (which I wrote AGES ago).

</RANT>

API Frameworks vs. Code Generators

I have recently used two of the ‘modern’ approaches to application development:
a) writing my code against an API framework
b) specifying my entities and some behavior, then pressing a button on a code generator

For a non-trivial application, I have to say that I prefer having a robust API framework coupled with various tool helpers over the code generators.

My two recent use cases – both are complex development environments (IDEs).
One was written using the Microsoft Visual Studio Extensibility (VSX) universe. This is a rich and barely documented API into the framework that is used to create Visual Studio itself.  This has a steep learning curve, but we (the developers of our tool) ended up learning what we needed. We were sometimes frustrated because an API would fall into Microsoft proprietary code, but generally it behaved “as we expected”.

The other product is also a complex programming environment, but based on Eclipse. The original team chose to use the “Eclipse Modeling Framework” (EMF) with its code generators. The development workflow was to define our persistent objects and their relationships in an Eclipse tool. We then had the EMF generate the class definitions, factory objects, and relationships as Java.
It’s interesting that similar to Visual Studio internals, the open source Eclipse also has poor documentation.  However, while the Visual Studio newsgroups were pretty helpful, the Eclipse (and Java) newsgroups are often condescending and parochial.

The EMF (and its graphical sibling ‘GMF’ which we also used) got an initial designer up and running lickety split.  It does have a lot of benefits for simple and demonstration projects.  However… we are tasked with adding lots of detailed functionality, behaviors, and even product branding.
This is when the fun started…

With a code-generator approach, there is a METRIC TON of generated code, and a lot of duplicated and almost-duplicated code.  That's OK for toys, but when we needed to start adding behavior that cannot be expressed in the Eclipse EMF designer, we ended up adding a lot more duplicated code (like a cross-cutting aspect – well – not fun…).  This was compounded by the imperfect RE-generation process.

By re-generation, I mean the use case where you change the basic EMF model – usually adding classes and properties – and then trigger the code regeneration process.
I have to give the Eclipse guys credit: they do a pretty good job of keeping our other changes to the Java files in place.  However, the oodles of files touched by the code-generation process means we need to both check the generated code in many places and copy/paste our customizations into all the new files.

As an example, consider adding a user-visible type (like a new control for the tool palette – it's really a type with manifestations like tool-palette properties, an instantiated-type property sheet, etc.).

In the API-framework approach, it is a manual process: we create (derive) the classes for the new type, then add the class specifics into a table.  The framework reads the table, and the new type (or control) appears on the tool palette and is hooked into the system.  You then customize (the real objective of adding the type).
Writing down the procedure, the files, and the expectations is pretty straightforward, and the other developers had no problem doing this.
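The registration table described above might look something like this. It's a minimal sketch with hypothetical names (`ControlRegistry`, `PaletteControl`, `GaugeControl`) – not the actual framework API, just the derive-a-class-then-add-one-table-entry shape of the workflow:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.Supplier;

// Hypothetical framework-side contract every palette control implements.
interface PaletteControl {
    String label();
}

// The table: the framework iterates it to populate the tool palette
// and to instantiate controls as needed.
class ControlRegistry {
    private static final Map<String, Supplier<PaletteControl>> TABLE = new LinkedHashMap<>();

    static void register(String name, Supplier<PaletteControl> factory) {
        TABLE.put(name, factory);
    }

    static PaletteControl create(String name) {
        return TABLE.get(name).get();
    }

    static Set<String> paletteEntries() {
        return TABLE.keySet();
    }
}

// Adding a new user-visible type: derive the class...
class GaugeControl implements PaletteControl {
    public String label() { return "Gauge"; }
}

public class PaletteDemo {
    public static void main(String[] args) {
        // ...then add one table entry. The framework does the rest.
        ControlRegistry.register("gauge", GaugeControl::new);
        System.out.println(ControlRegistry.paletteEntries()); // [gauge]
        System.out.println(ControlRegistry.create("gauge").label()); // Gauge
    }
}
```

The point is that the whole procedure is one derived class plus one table entry, which is why it was easy to write down and hand to the other developers.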

In the Eclipse EMF code generation approach – we edit the model in the Eclipse tool (which is saved to an XML file).  In the modeling tool we define the new type (control), its properties, its relationship to the other types – IOW everything you would expect in a modeling tool – and all is good here – this is what I want.
The difficulty comes in the Java code re-generation phase.  Changes to the EMF model both create new files (for the new types) and merge various other changes into various existing files.
The generation-and-merge process is pleasing in that it works as well as it does, but there are still quite a few files that need to be verified/changed/examined.  This particularly tweaked me because a new version of Eclipse had the generator using slightly different brace placement and indenting.  Hence a directory-wide code compare is basically useless (even a flexible tool like 'Beyond Compare' flags a change when next-line braces become the same-line flavor…), to say nothing of the generator now using an underscore a little more often than before…

In short, when creating a non-trivial application, I would rather climb a steeper learning curve up front than deal with ongoing pain.

More Proof that ClearCase Is A Toy – Not For Production Use

Yet more reasons to avoid ClearCase….

Scenario – I checked out a bunch of files on Friday and worked on them for a few days, adding all sorts of yummy goodness.  After code reviews, I started checking things in.

One file had been updated by somebody else – no problemo – happens all the time on a large project with many programmers.  But wait – who is this "Reed Shilts" guy?  Somebody with MY name has checked in a new version of the file I have been editing?
Looking through the CC history, nope – no new version has been checked in by Reed…

Since nobody here trusts the "Merge" capability in ClearCase (it sometimes works…), I un-checked-out the file, re-merged my changes by hand, then checked it in.

ClearCase being confused again

If You Like ClearCase – You Belong in a Mental Institution

I was watching a Google presentation by Linus Torvalds on Git.
http://www.youtube.com/watch?v=4XpnKHJAok8

He disparaged CVS and other SCC systems, but since CVS is a step up from ClearCase, well – his comments apply in spades…
BTW – Google uses Perforce…

Another person talking about ClearCase and why it is a tool from the 1970’s.
http://www.aldana-online.de/2009/03/19/reasons-why-you-should-stay-away-from-clearcase/

BTW – what other “tool” requires a dedicated administrator?
I administer a Perforce server used by 2 development groups (42 people).  It consumes about 5 minutes a month of my time.  It's straightforward and obvious – what more can I ask for?

Someone with a love/hate relationship with ClearCase:
http://clearcase.weintraubworld.net/