Anatomy of Z2

The z2-Environment was built using the following guiding principles:

  1. Do everything as late as reasonably possible ("on demand")
  2. Make sure a system is as self-contained, as modifiable, and as transparent as possible ("the source is the system")

with the additional constraint that typical Java application design approaches and frameworks should, of course, still work. In short, principle #1 says that as little as possible is done before execution. This includes in particular compilation, packaging, and upload of binaries (what is often referred to as deployment).

Principle #2 says that there should be little to no knowledge or logic external to the human-readable system definition in the repositories. Again, this includes build infrastructure and external configuration sources.

In order to build anything meaningful and extensible on top of these principles, "things" in z2 need to have names - which leads us to...

The z2 Component Model

A component in z2 is anything the z2 environment can understand and translate into a runtime resource that, for example, executes some code, starts a web server, or updates a logging configuration. In essence, the z2 component model is about entities whose life cycle is managed by z2.

Components are declared in a so-called component repository - for example the Subversion-based repository. By convention they are organised into a two-level hierarchy: The outer level is a taxonomy meant to organize things for humans, typically called a project or module, while the second level separates the individual components as understood by z2.

(Example: The environment project in z2_base)

Components have properties and, optionally, any number of file resources. In the latter case, a component is declared as a folder holding a file z.properties. If the component has no resources other than its properties, the folder structure may be omitted and a single properties file suffices.
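
For illustration, a module with one folder-style component and one property-only component might look like this (the property key used to state the component type is an assumption made for the sake of the example; the exact keys are found in the reference documentation):

  myModule/
    someComponent/
      z.properties                 (the component's properties, e.g. com.zfabrik.component.type=com.zfabrik.java)
      ...                          (further file resources of the component)
    otherComponent.properties      (a property-only component declared as a single file)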

It is important to note that the resources of a component are not retrieved from a repository unless the component is actually used (in accordance with principle #1). Instead, the properties of all components are aggregated into repository indices that allow z2 to find and identify components by queries over their properties; additional resources are only downloaded when they are really required.

As such, the z2 core runtime has very little understanding of component semantics. The only built-in component types are component factories and Java components. All other component types are added to the system in the form of component factories.

A component factory is in charge of turning a component declaration into a managed resource. A managed resource may depend on other resources (and hence on other components) and look them up as necessary. The resource implementation created by a component factory does whatever is needed to implement the component's semantics: In the case of a Java component this means checking whether some source code requires compilation and compiling it if so. In the case of a web application it means making sure a web server is up and registering the web application with it.

(From component descriptor to resource)
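
Conceptually, a component factory maps a component declaration to a resource object. The following Java sketch only illustrates this idea; the interface and method names are hypothetical and are not the actual z2 API:

  // Hypothetical illustration of the factory idea - not the actual z2 API.
  import java.util.Properties;

  // A managed resource wraps whatever a component stands for at runtime.
  interface ManagedResource {
      Object as(Class<?> requestedType);  // hand out the runtime representation
      void invalidate();                  // drop it, e.g. after the declaration changed
  }

  // A component factory turns a component declaration into a managed resource.
  interface ComponentFactory {
      ManagedResource createResource(String componentName, Properties declaration);
  }

  // Example: a factory for a made-up "logging configuration" component type.
  class LoggingConfigurationFactory implements ComponentFactory {
      public ManagedResource createResource(String componentName, Properties declaration) {
          return new ManagedResource() {
              public Object as(Class<?> requestedType) {
                  // apply the declared logging configuration lazily, on first use
                  return null; // illustration only
              }
              public void invalidate() {
                  // re-read or undo the configuration when the component changed
              }
          };
      }
  }

The important point is that the core only manages names, properties, and life cycles; everything else is implemented by such factories.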

The z2 component model is highly extensible, and introducing new component types is simple. Component types range from logging configurations and worker process configurations to re-usable application modules.

Conventions in z2

A generic component model like the one outlined above relies on conventions. One of those is the two-level naming hierarchy. Other structures would work just as well, but two levels are sufficient without becoming confusing.

Another convention is that within a project (or module), one component of type "com.zfabrik.java", i.e. a Java component, plays the role of the default place to search for implementations of other components in that module. For example, the component factory for Web applications (implementing the component type "com.zfabrik.ee.webapp") will use the private loader of the project's Java component as the parent of the Web application class loader - which means that the project's Java component is the logical place to put the Web application's Java code.
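
By this convention, a module containing a Web application typically holds two components, for example (names are illustrative; the component types are the ones mentioned above):

  myProject/
    java/
      z.properties      (the module's Java component of type com.zfabrik.java, holding the Java code)
    web/
      z.properties      (the Web application component of type com.zfabrik.ee.webapp)

Since the Web application's class loader has the Java component's private loader as its parent, the Web application's classes simply go into the java component of the same module.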

Yet another example is the environment project. By convention this is the place to put all configuration that describes a system's relationship with the technical environment around it. This includes the home layout in use, worker processes, target states, Web server configuration, data sources, logging setup, and even simple user realms. Having such a project provides a single place in which to spot purely environmental, non-code differences between code lines. Also, when using the local development approach, it is the single place to touch in order to test drive a system with a modified configuration.
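
A hypothetical environment module might thus contain components along the following lines (the names and folder structure are made up for illustration and will differ in a real system):

  environment/
    homeLayout/...      (which worker processes the home process runs)
    webWorker/...       (a worker process configuration and its target states)
    webServer/...       (Web server configuration)
    dataSources/...     (data source definitions)
    logging/...         (logging setup)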

Modes of Operation

When you start the z2-Environment as described in the tutorial, you use the server mode: This mode is characterized by a home process that manages one or more worker processes, as described in its home layout, which are configured to attain certain target states. The server mode is useful for operating a single processing node, or a cluster of possibly heterogeneously designated processing nodes, over a single system definition. In other words, z2 lets you operate a landscape with lots of managed Java VMs that serve different purposes but all live on one source of code and configuration - without builds, deployments, or local configuration trickery.

(server mode)

Alternatively, z2 can be used in the embedded mode: In that case the z2 core runtime is initialized within any kind of Java process, as described in ProcessRunner. Essentially, the server mode is simply a smart application of the embedded mode. The embedded mode is extremely useful when you want to benefit from the no-deployment distribution of code and configuration but the actual process execution is not completely under your control, or when the server mode simply does not fit your bill. For example, we used this mode to run Hadoop Map/Reduce jobs implemented in z2 (again, the key is: no build, no deployment).

(embedded mode)

Change Management

Typically a z2 system, be it for development, testing, or production, is defined in one or more exclusively assigned branches of some Subversion repository (until more SCMs are supported). Rather than pulling artefacts from many different sources, each in an individually chosen version, the z2 approach puts a strong emphasis on consistency and on knowing exactly what is "in a system".

While this may sound like it creates a lot of copies of artefacts that are reused from elsewhere (which it potentially does), it also makes sure that you can always debug and patch - in system - without fighting your way through a lot of layer boundaries.

As a result, most life cycle management operations can be mapped to operations over the versioning system - so that standard tools and scripts provide full access, transparency, security, and auditing. In particular, copying or cloning a system - e.g. for a specific feature development - is as easy as branching a code line.

In order to achieve consistency between different code lines (or systems, for that matter), it is important to understand the desired flow of changes in your development and production setup. Typically, organisations use a triple of code lines for an "established" system: one development code line, one test or integration code line, and finally the production code line. In reality, feature branches may be split off from the development code line, and the flow of changes is not strictly uni-directional, as changes in the test code line may need to flow back into the development code line.

(typical change flow)

In larger setups, multiple such dev->test->prod chains may be linked by establishing a change flow between the development code lines, possibly with an intermediate integration code line for simpler "rebasing".

(multi-system flows)

To support this style of change management, the z2 environment provides a simple "transport tool" for Subversion. It is implemented in the component com.zfabrik.dev.transport/tool. You can launch it from the command line by going into <z2_home>/run/bin and invoking

./lw.sh com.zfabrik.dev.transport/tool

on Linux/OS X or

lw com.zfabrik.dev.transport/tool

on Windows. Currently the tool requires a running z2 environment.

(transport tool)

The tool helps you create a Subversion workspace that holds all the modified projects from one repository, ready for submission to another repository. For example, assume the last change transport between repository A and repository B happened at revision 100 of A. In that case you would enter 100 as the source revision for repository A and ask the tool to identify all modified projects (including or excluding projects that do not even exist in B yet). The tool creates a list of projects that have been modified and, when asked, creates a workspace that holds all of those projects checked out from B with all changes from A applied.

It is then your job to verify whether those changes should enter B (watch out for changes in the environment!) and, if so, to commit them to B.

The tool also offers to create a change summary text, assembled from the log messages in A, that can be used as an informative log message for the commit to B - because at some point you will again be wondering which revision of A last made it into B.
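
For instance, assuming the tool left the prepared workspace in a directory transport-ws and the change summary in a file summary.txt (both names are just examples), the final step could look like this:

  cd transport-ws
  svn status                     (review once more what will enter B)
  svn commit -F ../summary.txt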

More

Want to learn more? Don't miss the complete documentation: Complete Documentation.