Last week I wrote about how I modeled the Wine Ontology into an in-memory graph using JGraphT. This week, I take this one step further and provide methods that allow a user to update the graph in memory. Changes made to the graph are journaled using Prevayler, so they are not lost when the application is restarted. Changes are also journaled to a database table for a human operator to review and apply to the master (MySQL) database.
To most people, this flow seems kind of backwards. However, it can be useful in situations where a corporate ontology is guarded by a group I call the Ontology Police. These are the people who decide what goes into the ontology and where, so if you are unfortunate enough to need a node where they did not intend one to be, the onus is on you to provide complete and verbose justification for why exactly you need it and why you cannot solve your problem some other way. If you've been there, you will understand exactly what I am talking about. With this approach, you first put the node in wherever you need it, check out the results, run through your regression tests to verify that nothing bad happened somewhere else, and then go back and ask for your node. This gives both you and the Ontology Police a better justification for making the change permanent.
I support the following update operations on the ontology:
- Add an entity - This will add an (id,name) pair into the ontology. The entity will not be connected to any other node at this point.
- Update entity - This will update the name for an existing entity in the ontology. Relationships connecting this node to other nodes will be preserved.
- Remove entity - This will remove the entity from the ontology. Any outgoing relations from this entity to other entities, and any incoming relations from other entities to this entity will be removed as well.
- Add attribute to entity - This will add an attribute to an entity. Attributes are keyed by name, so if the entity already has an attribute by the same name, then the new value will be appended to the existing value.
- Update attribute - This will update the value for an existing attribute for an entity.
- Remove attribute - This will remove an attribute from the entity.
- Add relationship - Add a relationship to the ontology. This will not be connected to anything; it's simply a relationship that can be used once added.
- Add fact - This allows a user to relate two entities via a relationship. Reverse relationships are automatically added.
- Remove fact - This allows a user to remove an edge from the ontology. An edge connects two entities in a relationship. Reverse edges are automatically detected and removed.
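The reverse-relationship convention used throughout the code below is worth calling out: a reversible relation is stored in the relation map twice, with the reverse direction keyed by the negated relation id. Here is a minimal, self-contained sketch of that probe; the relation names and ids are hypothetical, and this is a stand-in map rather than the real Ontology class.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the reverse-relation convention: a reversible relation appears
// in the relation map under both its id and the negated id.
public class ReverseRelationSketch {

    static boolean isReversible(Map<Long, String> relationMap, long relationId) {
        // addFact uses exactly this probe to decide whether to add a reverse edge
        return relationMap.get(-1L * relationId) != null;
    }

    public static void main(String[] args) {
        Map<Long, String> relationMap = new HashMap<Long, String>();
        relationMap.put(7L, "contains");      // hypothetical forward relation
        relationMap.put(-7L, "containedIn");  // its reverse, keyed by -id
        relationMap.put(8L, "similarTo");     // no reverse registered

        System.out.println(isReversible(relationMap, 7L)); // true
        System.out.println(isReversible(relationMap, 8L)); // false
    }
}
```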
I already had quite a few of the addXXX() methods in the Ontology class because the DbOntologyLoader was using them to load up the ontology from the database, but I had to add the updateXXX() and removeXXX() methods. The Ontology class is reproduced in its entirety below:
package com.mycompany.myapp.ontology;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.jgrapht.graph.ClassBasedEdgeFactory;
import org.jgrapht.graph.SimpleDirectedGraph;
public class Ontology implements Serializable {
private static final long serialVersionUID = 8903265933795172508L;
// static, so the logger is not dragged into Java serialization of this class
private static final Log log = LogFactory.getLog(Ontology.class);
protected Map<Long,Entity> entityMap;
protected Map<Long,Relation> relationMap;
protected SimpleDirectedGraph<Entity,RelationEdge> ontology;
public Ontology() {
entityMap = new HashMap<Long,Entity>();
relationMap = new HashMap<Long,Relation>();
ontology = new SimpleDirectedGraph<Entity,RelationEdge>(
new ClassBasedEdgeFactory<Entity,RelationEdge>(RelationEdge.class));
}
public Entity getEntityById(long entityId) {
return entityMap.get(entityId);
}
public Relation getRelationById(long relationId) {
return relationMap.get(relationId);
}
public Set<Long> getAvailableRelationIds(Entity entity) {
Set<Long> relationIds = new HashSet<Long>();
Set<RelationEdge> relationEdges = ontology.edgesOf(entity);
for (RelationEdge relationEdge : relationEdges) {
relationIds.add(relationEdge.getRelationId());
}
return relationIds;
}
public Set<Entity> getEntitiesRelatedById(Entity entity, long relationId) {
Set<RelationEdge> relationEdges = ontology.outgoingEdgesOf(entity);
Set<Entity> relatedEntities = new HashSet<Entity>();
for (RelationEdge relationEdge : relationEdges) {
if (relationEdge.getRelationId() == relationId) {
Entity relatedEntity = ontology.getEdgeTarget(relationEdge);
relatedEntities.add(relatedEntity);
}
}
return relatedEntities;
}
public void addEntity(Entity entity) {
entityMap.put(entity.getId(), entity);
ontology.addVertex(entity);
}
public void updateEntity(Entity entity) {
Entity entityToUpdate = entityMap.get(entity.getId());
if (entityToUpdate == null) {
return;
}
entityMap.put(entity.getId(), entity);
}
public void removeEntity(Entity entity) {
Entity entityToDelete = entityMap.get(entity.getId());
if (entityToDelete == null) {
return;
}
entityMap.remove(entity.getId());
ontology.removeVertex(entity);
}
public void addAttribute(long entityId, Attribute attribute) {
Entity entityToAddTo = entityMap.get(entityId);
if (entityToAddTo == null) {
return;
}
if (attribute == null) {
return;
}
List<Attribute> newAttributes = new ArrayList<Attribute>();
String attributeName = attribute.getName();
boolean attributeExists = false;
for (Attribute attr : entityToAddTo.getAttributes()) {
if (attributeName.equals(attr.getName())) {
String value = attr.getValue() + "|||" + attribute.getValue();
attr.setValue(value);
attributeExists = true;
}
newAttributes.add(attr);
}
if (! attributeExists) {
newAttributes.add(attribute);
}
entityToAddTo.setAttributes(newAttributes);
entityMap.put(entityId, entityToAddTo);
}
public void updateAttribute(long entityId, Attribute attribute) {
Entity entityToUpdate = entityMap.get(entityId);
if (entityToUpdate == null) {
return;
}
if (attribute == null) {
return;
}
String attributeName = attribute.getName();
List<Attribute> updatedAttributes = new ArrayList<Attribute>();
for (Attribute attr : entityToUpdate.getAttributes()) {
if (attributeName.equals(attr.getName())) {
attr.setValue(attribute.getValue());
}
updatedAttributes.add(attr);
}
entityToUpdate.setAttributes(updatedAttributes);
entityMap.put(entityId, entityToUpdate);
}
public void removeAttribute(long entityId, Attribute attribute) {
Entity entityToUpdate = entityMap.get(entityId);
if (entityToUpdate == null) {
return;
}
if (attribute == null) {
return;
}
String attributeName = attribute.getName();
List<Attribute> updatedAttributes = new ArrayList<Attribute>();
for (Attribute attr : entityToUpdate.getAttributes()) {
if (attributeName.equals(attr.getName())) {
// remove this from the updated list
continue;
}
updatedAttributes.add(attr);
}
entityToUpdate.setAttributes(updatedAttributes);
entityMap.put(entityId, entityToUpdate);
}
public void addRelation(Relation relation) {
relationMap.put(relation.getId(), relation);
}
public void addFact(Fact fact) {
Entity sourceEntity = getEntityById(fact.getSourceEntityId());
if (sourceEntity == null) {
log.error("Source entity(id=" + fact.getSourceEntityId() + ") not available");
return;
}
Entity targetEntity = getEntityById(fact.getTargetEntityId());
if (targetEntity == null) {
log.error("Target entity(id=" + fact.getTargetEntityId() + ") not available");
return;
}
long relationId = fact.getRelationId();
Relation relation = getRelationById(relationId);
if (relation == null) {
log.error("No relation found for relationId: " + relationId);
return;
}
// does this fact already exist? If so, don't do anything, just return.
// Check the edges between this specific pair of entities, not just any
// outgoing edge of the source with this relation id.
for (RelationEdge existingEdge : ontology.getAllEdges(sourceEntity, targetEntity)) {
if (existingEdge.getRelationId() == relationId) {
log.info("Fact: " + relation.getName() + "(" +
sourceEntity.getName() + "," + targetEntity.getName() +
") already added to ontology");
return;
}
}
RelationEdge relationEdge = new RelationEdge();
relationEdge.setRelationId(relationId);
ontology.addEdge(sourceEntity, targetEntity, relationEdge);
if (relationMap.get(-1L * relationId) != null) {
RelationEdge reverseRelationEdge = new RelationEdge();
reverseRelationEdge.setRelationId(-1L * relationId);
ontology.addEdge(targetEntity, sourceEntity, reverseRelationEdge);
}
}
public void removeFact(Fact fact) {
Entity sourceEntity = getEntityById(fact.getSourceEntityId());
if (sourceEntity == null) {
log.error("Source entity(id=" + fact.getSourceEntityId() + ") not available");
return;
}
Entity targetEntity = getEntityById(fact.getTargetEntityId());
if (targetEntity == null) {
log.error("Target entity(id=" + fact.getTargetEntityId() + ") not available");
return;
}
long relationId = fact.getRelationId();
Relation relation = getRelationById(relationId);
if (relation == null) {
log.error("Relation(id=" + relationId + ") not available");
return;
}
boolean isReversibleRelation = (relationMap.get(-1L * relationId) != null);
// collect matching edges first, then remove them all at once, to avoid
// modifying the edge set while iterating over it
Set<RelationEdge> edgesToRemove = new HashSet<RelationEdge>();
for (RelationEdge edge : ontology.getAllEdges(sourceEntity, targetEntity)) {
if (edge.getRelationId() == relationId) {
edgesToRemove.add(edge);
}
}
if (isReversibleRelation) {
// the reverse edge runs from target to source, so look it up separately
for (RelationEdge edge : ontology.getAllEdges(targetEntity, sourceEntity)) {
if (edge.getRelationId() == (-1L * relationId)) {
edgesToRemove.add(edge);
}
}
}
ontology.removeAllEdges(edgesToRemove);
}
}
I also added an enum that lists the different transaction types possible in the system, which looks like this:
package com.mycompany.myapp.ontology.transactions;
public enum Transactions {
addEntity(1),
updEntity(2),
delEntity(3),
addAttr(4),
updAttr(5),
delAttr(6),
addRel(7),
addFact(8),
delFact(9);
private int transactionId;
Transactions(int transactionId) {
this.transactionId = transactionId;
}
public int id() {
return transactionId;
}
}
To make system prevalence possible, we need to wrap this Ontology into a PrevalentSystem, which we do in our test case's @BeforeClass method, as shown below. The @Test is really simple: all it does is run the Entity add, update and delete transactions, counting the number of vertices (representing Entities) in the ontology graph after each one.
package com.mycompany.myapp.ontology;
import java.util.Set;
import javax.sql.DataSource;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.jgrapht.Graph;
import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.Test;
import org.prevayler.Prevayler;
import org.prevayler.PrevaylerFactory;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import com.mycompany.myapp.ontology.daos.EntityDao;
import com.mycompany.myapp.ontology.daos.FactDao;
import com.mycompany.myapp.ontology.daos.DbJournaller;
import com.mycompany.myapp.ontology.daos.RelationDao;
import com.mycompany.myapp.ontology.loaders.DbOntologyLoader;
import com.mycompany.myapp.ontology.transactions.EntityAddTransaction;
import com.mycompany.myapp.ontology.transactions.EntityDeleteTransaction;
import com.mycompany.myapp.ontology.transactions.EntityUpdateTransaction;
public class OntologyPrevalenceTest {
private final Log log = LogFactory.getLog(getClass());
private static final String CACHE_DIR = "src/main/resources/cache";
private static Ontology ontology;
private static Prevayler prevalentOntology;
@BeforeClass
public static void setUpBeforeClass() throws Exception {
DataSource dataSource = new DriverManagerDataSource(
"com.mysql.jdbc.Driver", "jdbc:mysql://localhost:3306/ontodb", "root", "xxx");
EntityDao entityDao = new EntityDao();
entityDao.setDataSource(dataSource);
RelationDao relationDao = new RelationDao();
relationDao.setDataSource(dataSource);
FactDao factDao = new FactDao();
factDao.setDataSource(dataSource);
factDao.setEntityDao(entityDao);
factDao.setRelationDao(relationDao);
DbOntologyLoader loader = new DbOntologyLoader();
loader.setEntityDao(entityDao);
loader.setRelationDao(relationDao);
loader.setFactDao(factDao);
ontology = loader.load();
prevalentOntology = PrevaylerFactory.createPrevayler(ontology, CACHE_DIR);
}
@Test
public void testAddEntityWithPrevalence() throws Exception {
log.debug("# vertices =" + ontology.ontology.vertexSet().size());
prevalentOntology.execute(new EntityAddTransaction(1L, -1L, "foo"));
log.debug("# vertices after addEntity tx =" + ontology.ontology.vertexSet().size());
prevalentOntology.execute(new EntityUpdateTransaction(1L, -1L, "bar"));
log.debug("# vertices after updEntity tx =" + ontology.ontology.vertexSet().size());
prevalentOntology.execute(new EntityDeleteTransaction(1L, -1L, "bar"));
log.debug("# vertices after delEntity tx =" + ontology.ontology.vertexSet().size());
}
}
Notice also that we execute updates by calling execute() on the prevalent version of the Ontology, passing in TransactionWithQuery implementations which contain the code for delegating back to the corresponding Ontology method. The TransactionWithQuery objects are mostly boilerplate, so if you've seen one, you've pretty much seen them all; I will only show one here. Here is the EntityAddTransaction.
package com.mycompany.myapp.ontology.transactions;
import java.util.Date;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.prevayler.TransactionWithQuery;
import com.mycompany.myapp.ontology.Entity;
import com.mycompany.myapp.ontology.Ontology;
import com.mycompany.myapp.ontology.daos.DbJournaller;
public class EntityAddTransaction implements TransactionWithQuery {
private static final long serialVersionUID = 4022640211143804194L;
private final Log log = LogFactory.getLog(getClass());
private long userId;
private long entityId;
private String entityName;
public EntityAddTransaction() {
super();
}
public EntityAddTransaction(long userId, long entityId, String entityName) {
this();
this.userId = userId;
this.entityId = entityId;
this.entityName = entityName;
}
public Object executeAndQuery(Object prevalentSystem, Date executionTime) throws Exception {
Entity entity = ((Ontology) prevalentSystem).getEntityById(entityId);
if (entity != null) {
throw new Exception("Entity(id=" + entityId + ") already exists");
}
entity = new Entity();
entity.setId(entityId);
entity.setName(entityName);
((Ontology) prevalentSystem).addEntity(entity);
DbJournaller.journal(Transactions.addEntity, userId, executionTime, entity);
return entity;
}
}
Notice how we don't pass a reference to the Entity object into the EntityAddTransaction constructor, even though that would have been the more natural approach. That natural approach is a Prevayler anti-pattern known as the Baptism Problem. The suggested pattern is to pass in an id and values, then look up the object inside the transaction and apply the changes to it. I used this pattern because it is the prescribed one, even though the other approach (which I copied from this OnJava article) worked in my tests as well.
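To see why the Baptism Problem bites, consider what happens on journal replay: a transaction that carries a live object reference gets serialized and later deserialized, so on replay it holds a deep copy, and mutating that copy misses the object actually inside the prevalent system. The following is a self-contained sketch of that failure mode; the Entity class and map-based "prevalent system" here are hypothetical stand-ins, not Prevayler's API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Sketch of the Baptism Problem: a journaled transaction holding an object
// reference mutates a serialized copy on replay, not the real object.
public class BaptismSketch {

    static class Entity implements Serializable {
        long id;
        String name;
        Entity(long id, String name) { this.id = id; this.name = name; }
    }

    // simulate the serialize/deserialize round-trip a journaled transaction goes through
    static Entity roundTrip(Entity entity) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(entity);
            oos.flush();
            return (Entity) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Map<Long, Entity> prevalentSystem = new HashMap<Long, Entity>();
        prevalentSystem.put(1L, new Entity(1L, "foo"));

        // anti-pattern: mutate the reference the replayed transaction holds
        Entity replayedCopy = roundTrip(prevalentSystem.get(1L));
        replayedCopy.name = "bar";
        System.out.println(prevalentSystem.get(1L).name); // still "foo"

        // prescribed pattern: carry only the id, look the object up inside
        // the prevalent system, then mutate it
        prevalentSystem.get(replayedCopy.id).name = "bar";
        System.out.println(prevalentSystem.get(1L).name); // now "bar"
    }
}
```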
One thing to note is that each transaction is executed twice by Prevayler, once to check whether it can be executed, and a second time to actually execute it. This stumped me for a while until I found some discussion of why this is done here and here. Normally this is not a problem, until you want to put extra code into the transaction, such as my call to DbJournaller.journal(), which writes a line into a database journal for later review.
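The double execution described above can be sketched with a toy stand-in: the "prevayler" below invokes the transaction body twice, so an unguarded side effect (like a journal write) fires twice. The names here are illustrative, not Prevayler's actual API.

```java
// Toy stand-in demonstrating why side effects inside a twice-executed
// transaction need an idempotency guard.
public class DoubleExecutionSketch {

    interface Tx {
        void executeAndQuery();
    }

    static void execute(Tx tx) {
        tx.executeAndQuery(); // first pass: can this transaction run?
        tx.executeAndQuery(); // second pass: actually run it
    }

    static int journalWrites = 0;

    public static void main(String[] args) {
        execute(new Tx() {
            public void executeAndQuery() {
                journalWrites++; // unguarded side effect
            }
        });
        System.out.println(journalWrites); // 2, not 1
    }
}
```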
Another problem I faced with the DbJournaller is that the call involves I/O, which is by definition non-deterministic, and determinism is a requirement for Prevayler transactions. To get around this, I created a DbJournaller class with static methods that is completely self-contained (I was getting NullPointerExceptions on the JdbcTemplate when trying to pass in a Dao with a DataSource pre-injected into it). The DbJournaller is shown below. Each method is guarded by a check to see whether the data has already been inserted, so the methods insert into the journal table only during the first call to the TransactionWithQuery.executeAndQuery() method from Prevayler.
package com.mycompany.myapp.ontology.daos;
import java.util.Date;
import javax.sql.DataSource;
import net.sf.json.JSONObject;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.support.JdbcDaoSupport;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import com.mycompany.myapp.ontology.Attribute;
import com.mycompany.myapp.ontology.Entity;
import com.mycompany.myapp.ontology.Fact;
import com.mycompany.myapp.ontology.Relation;
import com.mycompany.myapp.ontology.transactions.Transactions;
public class DbJournaller extends JdbcDaoSupport {
private static final Log log = LogFactory.getLog(DbJournaller.class);
private static JdbcTemplate jdbcTemplate;
static {
DataSource dataSource = new DriverManagerDataSource(
"com.mysql.jdbc.Driver", "jdbc:mysql://localhost:3306/ontodb", "root", "xxx");
jdbcTemplate = new JdbcTemplate(dataSource);
}
public static boolean journal(Transactions transaction, long userId, Date executionTime, Object... objs) {
try {
switch (transaction) {
case addEntity: {
Entity entity = (Entity) objs[0];
addEntity(userId, executionTime, entity);
break;
}
case updEntity: {
Entity entity = (Entity) objs[0];
updateEntity(userId, executionTime, entity);
break;
}
case delEntity: {
Entity entity = (Entity) objs[0];
deleteEntity(userId, executionTime, entity);
break;
}
case addAttr: {
Entity entity = (Entity) objs[0];
Attribute attribute = (Attribute) objs[1];
addAttribute(userId, executionTime, entity, attribute);
break;
}
case updAttr: {
Entity entity = (Entity) objs[0];
Attribute attribute = (Attribute) objs[1];
updateAttribute(userId, executionTime, entity, attribute);
break;
}
case delAttr: {
Entity entity = (Entity) objs[0];
Attribute attribute = (Attribute) objs[1];
deleteAttribute(userId, executionTime, entity, attribute);
break;
}
case addRel: {
Relation relation = (Relation) objs[0];
addRelation(userId, executionTime, relation);
break;
}
case addFact: {
Fact fact = (Fact) objs[0];
addFact(userId, executionTime, fact);
break;
}
case delFact: {
Fact fact = (Fact) objs[0];
removeFact(userId, executionTime, fact);
break;
}
default:
break;
}
return true;
} catch (Exception e) {
log.error(e);
return false;
}
}
private static void addEntity(long userId, Date executionTime, Entity entity) {
if (isTransactionApplied(executionTime, userId, Transactions.addEntity)) {
return;
}
JSONObject jsonObj = new JSONObject();
jsonObj.put("id", entity.getId());
jsonObj.put("name", entity.getName());
insertJournal(userId, Transactions.addEntity, jsonObj);
}
private static void updateEntity(long userId, Date executionTime, Entity entity) {
if (isTransactionApplied(executionTime, userId, Transactions.updEntity)) {
return;
}
JSONObject jsonObj = new JSONObject();
jsonObj.put("id", entity.getId());
jsonObj.put("name", entity.getName());
insertJournal(userId, Transactions.updEntity, jsonObj);
}
private static void deleteEntity(long userId, Date executionTime, Entity entity) {
if (isTransactionApplied(executionTime, userId, Transactions.delEntity)) {
return;
}
JSONObject jsonObj = new JSONObject();
jsonObj.put("id", entity.getId());
jsonObj.put("name", entity.getName());
insertJournal(userId, Transactions.delEntity, jsonObj);
}
private static void addAttribute(long userId, Date executionTime, Entity entity, Attribute attribute) {
if (isTransactionApplied(executionTime, userId, Transactions.addAttr)) {
return;
}
JSONObject jsonObj = new JSONObject();
jsonObj.put("entityId", entity.getId());
jsonObj.put("attributeName", attribute.getName());
jsonObj.put("attributeValue", attribute.getValue());
insertJournal(userId, Transactions.addAttr, jsonObj);
}
private static void updateAttribute(long userId, Date executionTime, Entity entity, Attribute attribute) {
if (isTransactionApplied(executionTime, userId, Transactions.updAttr)) {
return;
}
JSONObject jsonObj = new JSONObject();
jsonObj.put("entityId", entity.getId());
jsonObj.put("attributeName", attribute.getName());
jsonObj.put("attributeValue", attribute.getValue());
insertJournal(userId, Transactions.updAttr, jsonObj);
}
private static void deleteAttribute(long userId, Date executionTime, Entity entity, Attribute attribute) {
if (isTransactionApplied(executionTime, userId, Transactions.delAttr)) {
return;
}
JSONObject jsonObj = new JSONObject();
jsonObj.put("entityId", entity.getId());
jsonObj.put("attributeName", attribute.getName());
jsonObj.put("attributeValue", attribute.getValue());
insertJournal(userId, Transactions.delAttr, jsonObj);
}
private static void addRelation(long userId, Date executionTime, Relation relation) {
if (isTransactionApplied(executionTime, userId, Transactions.addRel)) {
return;
}
JSONObject jsonObj = new JSONObject();
jsonObj.put("relationId", relation.getId());
jsonObj.put("relationName", relation.getName());
insertJournal(userId, Transactions.addRel, jsonObj);
}
private static void addFact(long userId, Date executionTime, Fact fact) {
if (isTransactionApplied(executionTime, userId, Transactions.addFact)) {
return;
}
JSONObject jsonObj = new JSONObject();
jsonObj.put("sourceEntityId", fact.getSourceEntityId());
jsonObj.put("targetEntityId", fact.getTargetEntityId());
jsonObj.put("relationId", fact.getRelationId());
insertJournal(userId, Transactions.addFact, jsonObj);
}
private static void removeFact(long userId, Date executionTime, Fact fact) {
if (isTransactionApplied(executionTime, userId, Transactions.delFact)) {
return;
}
JSONObject jsonObj = new JSONObject();
jsonObj.put("sourceEntityId", fact.getSourceEntityId());
jsonObj.put("targetEntityId", fact.getTargetEntityId());
jsonObj.put("relationId", fact.getRelationId());
insertJournal(userId, Transactions.delFact, jsonObj);
}
private static boolean isTransactionApplied(Date executionTime, long userId, Transactions tx) {
int count = jdbcTemplate.queryForInt(
"select count(*) from journal where log_date = ? and user_id = ? and tx_id = ?",
new Object[] {executionTime, userId, tx.id()});
return (count > 0);
}
private static void insertJournal(long userId, Transactions tx, JSONObject json) {
jdbcTemplate.update(
"insert into journal(user_id, tx_id, args) values (?, ?, ?)",
new Object[] {userId, tx.id(), json.toString()});
}
}
I added the following new database tables to support this journaling-to-database strategy. The SQL for them is shown below:
create table users (
id int(11) auto_increment not null,
name varchar(32) not null,
primary key(id)
) engine=InnoDB;
insert into users(name) values ('sujit');
create table journal (
log_date timestamp default now() not null,
user_id int(11) not null,
tx_id int(11) not null,
args varchar(64) not null,
primary key(log_date, user_id, tx_id)
) engine=InnoDB;
Running the test (above) produces the following output. It also produces the journal files in the CACHE_DIR, as well as the entries in the database journal table. I also verified that the journal file is read back on the next startup if it is not deleted. The output (such as it is) is shown below:
# vertices =237
# vertices after addEntity tx =238
# vertices after updEntity tx =238
# vertices after delEntity tx =237
It also created a journal file:
sujit@sirocco $ ls -l src/main/resources/cache/
total 4
-rw-r--r-- 1 sujit sujit 1239 2008-06-07 10:04 0000000000000000001.journal
and entries in the journal table in our database.
mysql> select * from journal;
+---------------------+---------+-------+------------------------+
| log_date | user_id | tx_id | args |
+---------------------+---------+-------+------------------------+
| 2008-06-07 10:04:18 | 1 | 1 | {"id":-1,"name":"foo"} |
| 2008-06-07 10:04:19 | 1 | 2 | {"id":-1,"name":"bar"} |
| 2008-06-07 10:04:19 | 1 | 3 | {"id":-1,"name":"bar"} |
+---------------------+---------+-------+------------------------+
3 rows in set (0.00 sec)
I found Prevayler quite easy to work with once I knew how. The product itself is good and works well if you follow some simple rules and if your application happens to satisfy the Prevalent Hypothesis, which says that your data must fit into RAM, now and in the foreseeable future. That may or may not be a tall order, depending on your application. The founder of the project has rather strong opinions on memory-vs-database usage, which can be a major turn-off for some people. But regardless of whether you agree with him or not, the product itself is effective and fairly simple to use once you get past the initial learning curve. If you want to get started quickly with Prevayler, you may find the articles available on the Prevayler Links page more useful (at least I found them more useful) than the examples that ship with the distribution.
Update: 2008-06-14
Over the last week, I have been doing some more testing, cleanup and refactoring of the code above, and I found that injecting the database journal call inside the TransactionWithQuery was not working. The problem was that, as mentioned above, each call of a TransactionWithQuery goes through the executeAndQuery() method twice, first to check whether it can execute, and then to actually execute. As a result, each transaction was writing two records into the journal table. My workaround was to check the execution time, but while that seemed to work for a while, I started seeing cases where the two transaction calls did not fall within the same second (my database timestamp has only second resolution). So I had to abandon that approach.
However, going back to my requirements, I needed a mechanism for the user to try out changes to the ontology immediately, and to provide ontology admins with a way to impose manual oversight. So this is the approach I took.
- Removed the database journaling call from within the TransactionWithQuery implementations.
- Replaced the default JournalSerializer, which writes Java-serialized objects, with XStreamSerializer, which writes out the journal as XML snippets.
- This allows journaling to happen using the standard Prevayler mechanism; at the same time, an admin can take a copy of the journals, remove the ones he doesn't like, and run the remaining transactions through another process that applies them to the database.
- At this point, the journals can be deleted and the application restarted to produce the ontology that has been blessed by the ontology team.
Here are the changes in my OntologyTest to set up a customized Prevayler object with the journal serialization mechanism changed to use XStream. I changed the snapshot serialization mechanism to use XStream as well, but I probably won't ever use it.
// OntologyTest.java
...
@BeforeClass
public static void setUpBeforeClass() throws Exception {
...
ontology = loader.load();
PrevaylerFactory factory = new PrevaylerFactory();
XStreamSerializer xstreamSerializer = new XStreamSerializer();
factory.configureJournalSerializer(xstreamSerializer);
factory.configureSnapshotSerializer(xstreamSerializer);
factory.configurePrevalenceDirectory(CACHE_DIR);
factory.configurePrevalentSystem(ontology);
prevalentOntology = factory.create();
}
...
And the test has been beefed up to run through all the available transactions. It creates two Entities, exercises Attributes on one, creates a Relation, combines the two Entities and the Relation into a Fact, makes some updates, then deletes all these objects from the Ontology. Here is the test case:
// OntologyTest.java
...
@Test
public void testTransactionsWithPrevalence() throws Exception {
prevalentOntology.execute(new EntityAddTransaction(-1L, "foo"));
prevalentOntology.execute(new EntityUpdateTransaction(-1L, "bar"));
prevalentOntology.execute(new EntityAddTransaction(-2L, "baz"));
prevalentOntology.execute(new AttributeAddTransaction(-1L, "name", "barname"));
prevalentOntology.execute(new AttributeUpdateTransaction(-1L, "name", "fooname"));
prevalentOntology.execute(new AttributeDeleteTransaction(-1L, "name", "fooname"));
prevalentOntology.execute(new RelationAddTransaction(-100L, "some relation"));
prevalentOntology.execute(new FactAddTransaction(-1L, -2L, -100L));
prevalentOntology.execute(new FactDeleteTransaction(-1L, -2L, -100L));
prevalentOntology.execute(new EntityDeleteTransaction(-1L, "bar"));
prevalentOntology.execute(new EntityDeleteTransaction(-2L, "baz"));
}
...
And the resulting journal file is quite easy to read:
C1;withQuery=true;systemVersion=12;executionTime=1213414210090
<com.mycompany.myapp.ontology.transactions.EntityAddTransaction>
<entityId>-1</entityId>
<entityName>foo</entityName>
</com.mycompany.myapp.ontology.transactions.EntityAddTransaction>
C7;withQuery=true;systemVersion=13;executionTime=1213414211688
<com.mycompany.myapp.ontology.transactions.EntityUpdateTransaction>
<entityId>-1</entityId>
<entityName>bar</entityName>
</com.mycompany.myapp.ontology.transactions.EntityUpdateTransaction>
C1;withQuery=true;systemVersion=14;executionTime=1213414211691
<com.mycompany.myapp.ontology.transactions.EntityAddTransaction>
<entityId>-2</entityId>
<entityName>baz</entityName>
</com.mycompany.myapp.ontology.transactions.EntityAddTransaction>
F9;withQuery=true;systemVersion=15;executionTime=1213414211698
<com.mycompany.myapp.ontology.transactions.AttributeAddTransaction>
<entityId>-1</entityId>
<attributeName>name</attributeName>
<attributeValue>barname</attributeValue>
</com.mycompany.myapp.ontology.transactions.AttributeAddTransaction>
FF;withQuery=true;systemVersion=16;executionTime=1213414211701
<com.mycompany.myapp.ontology.transactions.AttributeUpdateTransaction>
<entityId>-1</entityId>
<attributeName>name</attributeName>
<attributeValue>fooname</attributeValue>
</com.mycompany.myapp.ontology.transactions.AttributeUpdateTransaction>
FF;withQuery=true;systemVersion=17;executionTime=1213414211704
<com.mycompany.myapp.ontology.transactions.AttributeDeleteTransaction>
<entityId>-1</entityId>
<attributeName>name</attributeName>
<attributeValue>fooname</attributeValue>
</com.mycompany.myapp.ontology.transactions.AttributeDeleteTransaction>
D9;withQuery=true;systemVersion=18;executionTime=1213414211707
<com.mycompany.myapp.ontology.transactions.RelationAddTransaction>
<relationId>-100</relationId>
<relationName>some relation</relationName>
</com.mycompany.myapp.ontology.transactions.RelationAddTransaction>
As you can see, it is completely human-readable and relatively easy to parse. Each transaction begins with a non-XML header line, followed by the XML-serialized version of the Transaction and its constructor arg values. I haven't written the database converter yet, but I will pretty soon, when I build the UI for the Ontology.
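Since each record in the journal shown above is one non-XML header line followed by one XML element, splitting it into per-transaction snippets is straightforward. Here is a hedged sketch of that step (this is not the author's converter, which is not yet written, and the short element names in the sample string are abbreviations of the real fully-qualified class names):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of splitting an XStream-serialized Prevayler journal into
// per-transaction XML snippets, keyed off the non-XML header lines.
public class JournalSplitSketch {

    static List<String> split(String journal) {
        List<String> snippets = new ArrayList<String>();
        StringBuilder current = null;
        for (String line : journal.split("\n")) {
            if (!line.startsWith("<")) {
                // a header line (e.g. "C1;withQuery=true;...") starts a new record
                if (current != null) {
                    snippets.add(current.toString().trim());
                }
                current = new StringBuilder();
            } else if (current != null) {
                current.append(line).append('\n');
            }
        }
        if (current != null) {
            snippets.add(current.toString().trim());
        }
        return snippets;
    }

    public static void main(String[] args) {
        String journal =
            "C1;withQuery=true;systemVersion=12;executionTime=1213414210090\n" +
            "<EntityAddTransaction>\n" +
            "<entityId>-1</entityId>\n" +
            "</EntityAddTransaction>\n" +
            "C7;withQuery=true;systemVersion=13;executionTime=1213414211688\n" +
            "<EntityUpdateTransaction>\n" +
            "<entityId>-1</entityId>\n" +
            "</EntityUpdateTransaction>";
        System.out.println(split(journal).size()); // 2
    }
}
```

Each snippet could then be deserialized with XStream and mapped to an insert against the master database.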
Also, if you are trying to follow along by cutting and pasting the code here and running it locally, my apologies. The code has been changing faster than this weekly blog, and code that I posted earlier may no longer look the same. It may be more useful to just read the blog at this point and let me know if you have any ideas for improvement.
Update 2009-04-26: In recent posts, I have been building on code written and described in previous posts, so there were (and rightly so) quite a few requests for the code. So I've created a project on Sourceforge to host the code. You will find the complete source code built so far in the project's SVN repository.