Omneity's Core Technologies

Omneity uses many existing frameworks and libraries.

It is early days in investigating how appropriate these technologies are, how easily they can be used, and how decoupled they can remain, but here is the current thinking.

The Core
Omneity is based around the OSGi framework for plugin systems.

Apache Felix is being used as the reference OSGi implementation, with iPOJO to simplify the component code considerably (a look through iPOJO Gotchas is useful if you're new to iPOJO).
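As a rough illustration of how iPOJO reduces boilerplate, here is a minimal annotated component sketch. The `KnowledgeStore` interface is invented purely for illustration; only the iPOJO annotations and the OSGi `LogService` are real APIs.

```java
import org.apache.felix.ipojo.annotations.Component;
import org.apache.felix.ipojo.annotations.Invalidate;
import org.apache.felix.ipojo.annotations.Provides;
import org.apache.felix.ipojo.annotations.Requires;
import org.apache.felix.ipojo.annotations.Validate;
import org.osgi.service.log.LogService;

// Illustrative service interface -- not part of any real Omneity API.
interface KnowledgeStore { }

@Component
@Provides
public class InMemoryKnowledgeStore implements KnowledgeStore {

    // iPOJO injects a matching service and manages its dynamism:
    // the component is invalidated automatically if the dependency goes away.
    @Requires
    private LogService log;

    @Validate
    public void start() { log.log(LogService.LOG_INFO, "knowledge store up"); }

    @Invalidate
    public void stop()  { log.log(LogService.LOG_INFO, "knowledge store down"); }
}
```

iPOJO generates the service registration and lifecycle plumbing that would otherwise be hand-written `BundleActivator` code.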

Knowledge
OWL 2 is the knowledge representation formalism, and consequently the core language, used by Omneity.

Two frameworks are being considered:

 * Jena
 * OpenRDF's Sesame

There is also a Sesame-Jena adapter.
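To give a flavour of the first option, here is a sketch of loading an OWL ontology through Jena's ontology API (the classic `com.hp.hpl.jena` package layout; the ontology file name is a placeholder).

```java
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.util.iterator.ExtendedIterator;

public class LoadOntology {
    public static void main(String[] args) {
        // An in-memory model with OWL support and no inference attached.
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        model.read("file:project-knowledge.owl"); // RDF/XML assumed by default

        // List the classes declared in the ontology.
        ExtendedIterator<OntClass> it = model.listClasses();
        while (it.hasNext()) {
            System.out.println(it.next().getURI());
        }
    }
}
```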

Capture
Knowledge capture will be performed by a number of custom plugins that interpret the underlying VCS and ALM tools into OWL 2 representations. Some of these will run in real time; others will be polled into a local persistent store.
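A capture plugin of this kind essentially maps tool-specific records onto RDF statements. The sketch below shows the idea for a single VCS commit, using Jena; the `http://example.org/vcs#` vocabulary is invented for illustration, not a real Omneity schema.

```java
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Resource;

public class CommitCapture {
    // Placeholder vocabulary namespace -- illustrative only.
    static final String VCS = "http://example.org/vcs#";

    // Turn one commit's metadata into a small RDF graph.
    public static Model describeCommit(String id, String author, String msg) {
        Model m = ModelFactory.createDefaultModel();
        Resource commit = m.createResource(VCS + "commit/" + id);
        commit.addProperty(m.createProperty(VCS, "author"), author)
              .addProperty(m.createProperty(VCS, "message"), msg);
        return m;
    }
}
```

A real-time plugin would build such a model from a post-commit hook; a polling plugin would build one per new revision found and merge it into the local store.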

Text indexing will be provided by Lucene (not directly related to the KR component, but provides a useful additional search feature).
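Indexing captured text with Lucene is straightforward; the sketch below targets the Lucene 3.x-era API (signatures differ between major Lucene versions), with an in-memory index for illustration.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class TextIndexer {
    public static void main(String[] args) throws Exception {
        Directory dir = new RAMDirectory(); // a real agent would use FSDirectory
        IndexWriterConfig cfg = new IndexWriterConfig(
                Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36));
        IndexWriter writer = new IndexWriter(dir, cfg);

        // Index one captured text fragment, e.g. a commit message.
        Document doc = new Document();
        doc.add(new Field("body", "fixes issue 42 in the parser",
                          Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc);
        writer.close();
    }
}
```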

OCR libraries will allow additional information capture (useful for capturing information from non-text PDFs for example).

Processing
There will undoubtedly be several processing units created, but the default DL reasoner will be Pellet.
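Pellet ships a Jena binding, so attaching the reasoner to an ontology model is a one-liner. A sketch (the ontology file name is a placeholder):

```java
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import org.mindswap.pellet.jena.PelletReasonerFactory;

public class Reasoning {
    public static void main(String[] args) {
        // An OntModelSpec pre-wired to Pellet's DL reasoner.
        OntModel model =
            ModelFactory.createOntologyModel(PelletReasonerFactory.THE_SPEC);
        model.read("file:project-knowledge.owl");

        // Check the ontology for consistency.
        System.out.println("Consistent: " + model.validate().isValid());
    }
}
```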

The SAIL API is being investigated as a way to decouple the reasoners from the KR framework and store (note, SAIL is supported by the Blueprints interface to graph databases).
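The decoupling comes from SAIL being a stackable interface: the repository layer only sees a `Sail`, so a reasoning SAIL or a Blueprints GraphSail over a graph database could be substituted for the in-memory store used in this sketch (Sesame 2.x API).

```java
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.sail.SailRepository;
import org.openrdf.sail.memory.MemoryStore;

public class SailStack {
    public static void main(String[] args) throws Exception {
        // The MemoryStore could be swapped for any other Sail implementation.
        Repository repo = new SailRepository(new MemoryStore());
        repo.initialize();

        RepositoryConnection conn = repo.getConnection();
        try {
            System.out.println("Store empty: " + conn.isEmpty());
        } finally {
            conn.close();
            repo.shutDown();
        }
    }
}
```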

Storage
Knowledge will be stored in a distributed P2P network, but local to each node information will be held in graph databases. Several possibilities are currently being looked at, including:

 * Stardog (this appears to be commercial, which may rule it out)
 * Neo4J

In order to maintain independence between Omneity's core and the datastore, the Blueprints connector API will be used.
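Programming against the Blueprints `Graph` interface keeps the backend swappable. A sketch using the Blueprints 2.x API with the reference in-memory implementation; a Neo4j backend would differ only in the constructor line.

```java
import com.tinkerpop.blueprints.Graph;
import com.tinkerpop.blueprints.Vertex;
import com.tinkerpop.blueprints.impls.tg.TinkerGraph;

public class GraphDemo {
    public static void main(String[] args) {
        // Swap TinkerGraph for e.g. a Neo4j-backed Graph without touching
        // the rest of the code.
        Graph g = new TinkerGraph();

        Vertex commit = g.addVertex(null);
        commit.setProperty("kind", "commit");
        Vertex issue = g.addVertex(null);
        issue.setProperty("kind", "issue");

        g.addEdge(null, commit, issue, "references");
        g.shutdown();
    }
}
```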

Logging
Omneity code will use Logback (via the SLF4J API) for logging, but will redirect all logging through the OSGi Log Service.

Why not use the OSGi Log Service directly? Firstly, some bundles will consist of third-party code re-bundled for use in the OSGi framework, and these third-party packages will not use the Log Service directly, so an API redirect will be required anyway. Secondly, many of the packages developed for Omneity will be applicable to non-OSGi applications, and using the OSGi Log Service interface directly would force non-OSGi clients to provide a new logging implementation. Rather than do that, using a standard logging interface and redirecting it inside Omneity seems cleaner (especially since this redirect has to be done for the re-bundled packages anyway).
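In practice this means application code only ever sees the SLF4J API; Logback is the backend outside OSGi, and inside the container an SLF4J binding can forward to the OSGi Log Service. The class name here is illustrative.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative class name; any Omneity package would log the same way.
public class CaptureAgent {
    private static final Logger log =
        LoggerFactory.getLogger(CaptureAgent.class);

    public void poll() {
        log.info("polling repository");
        // Parameterised messages avoid string concatenation when the
        // level is disabled.
        log.debug("next poll in {} ms", 60000);
    }
}
```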

Communication
Communication between Omneity peers and with outside, non-Omneity clients and servers will take several forms. Between peers, the principal communication channel will be a JXTA P2P network (such a scheme has been discussed briefly elsewhere).
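Bootstrapping a JXTA peer is compact with the JXSE `NetworkManager` (this sketch targets the JXSE 2.5-era API; the peer name is a placeholder).

```java
import net.jxta.platform.NetworkManager;

public class PeerBootstrap {
    public static void main(String[] args) throws Exception {
        // Configure this agent as an edge peer named "omneity-peer".
        NetworkManager manager =
            new NetworkManager(NetworkManager.ConfigMode.EDGE, "omneity-peer");

        manager.startNetwork(); // joins the net peer group

        // ... discovery and pipe-based communication with other
        //     Omneity peers would happen here ...

        manager.stopNetwork();
    }
}
```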

Communication with external clients and servers will be plugin-specific. These services will be provided on an as-needed basis in order to keep Omneity agent size appropriate (there's no sense, for example, in providing a web service interface on an agent that is solely responsible for feeding information into an Omneity network from a set of Subversion repositories).

The issue of dynamic application change is also discussed elsewhere (specifically, using dynamic AOP to handle dynamic service availability). This is of interest because Omneity will be able to dynamically share services and functionality among peers (with suitable security safeguards). Making these new services and features available to the local agent requires dynamic updates; the alternative, restarting the agent, is unacceptable for most applications of Omneity agents.

Configuration
Configuration will be handled in a similar fashion to logging. This is a little more involved (and may not be totally practicable) because in an OSGi environment dynamic configuration changes should be handled using Config Admin. That is generally not something that can easily be retrofitted to third-party packages unless they provide an API for changing their internal configuration dynamically.
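For Omneity's own bundles, receiving dynamic configuration via Config Admin means registering a `ManagedService` under a persistent identifier (PID) and reacting in `updated()`. The PID and property names here are placeholders.

```java
import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedService;

public class AgentConfig implements ManagedService {

    // Register under a PID so Config Admin can route configuration to us.
    public void register(BundleContext ctx) {
        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put(Constants.SERVICE_PID, "omneity.agent"); // placeholder PID
        ctx.registerService(ManagedService.class.getName(), this, props);
    }

    // Called by Config Admin whenever the configuration changes.
    public void updated(Dictionary properties) throws ConfigurationException {
        if (properties == null) {
            return; // no configuration stored yet
        }
        Object interval = properties.get("poll.interval"); // placeholder key
        // Apply the new setting dynamically, without restarting the bundle.
    }
}
```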

Basic installation and system-wide configuration settings should be made available to users using Commons Logging, while user runtime configuration is persisted in the agent's local database or the Config Admin cache.