Tasks are now kept in simple text files.
Easier tasks should preferably be listed higher up.
Task types are: Urgent, In progress, Done.
| Task name | Explanation |
|---|---|
| Add importer and devel APIs to the documentation | |
| Notes clone | List records, click on a record, edit a record, delete a record, etc. No security yet; it should come later from actors |
| Rapid application development features | Allow the developer to get quickly to a finished application by generating add, edit and delete links and JSP file names for the sub-objects of an object |
| Full Text Search | One of the most frequently requested features for Karamba is full text search. It should look through a Makumba database for a certain word in all the text fields |
| XML-XSL | Currently Makumba represents its data in Java Dictionaries and displays it through a JSP taglib. The modern way is to represent data in XML and show it via XSL. Useful sometimes, especially when exchanging data with other organisations |
| Implement drivers for new DBs | Oracle, Informix. There shouldn't be many differences between the current "generic SQL" driver and any of the "big players". A look at the org.makumba.db.sql.mysql or org.makumba.db.sql.pgsql drivers should give an idea of how this is to be done; there is also some documentation about this in the SQL engine pages. The major problem is to find a "free" copy of the respective SQL engine and to make the JDBC connection to it |
| Improve SQL driver efficiency | One idea: make use of TIMESTAMP columns to automatically store the creation date (TS_create) and modification date (TS_modify). MySQL is one of the engines suitable for this |
| Replication, selective replication | Makumba was designed from the beginning to have replication capabilities. Karamba data will need to live on unconnected desktops, laptops, etc., and LBGs will want a local copy for fast access, so ensuring replication between copies will be needed soon. LBGs only need the data they can access (e.g. their own SC applicants and not others), so replication needs to be selective. Replication is similar to copying in terms of structure and API. Low-level SQL replication (if the engine supports it) might be an idea |
| Data import from HTML tables | We should be able to import objects from HTML tables that contain a number of records. Write an order-dependent parser that uses the TD markers |
| New syntax/parser | We have talked about a new syntax, but the current parser still uses the old one. Use ANTLR? We already use it for the OQL translation, so we would take more advantage of antlr.jar :) Existing data definition files will have to be translated to the new syntax; there is already an org.makumba.abstr.translator package |
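
The full-text-search task above can be illustrated with a minimal in-memory sketch. All names here (`FullTextSearch`, `search`) are hypothetical, and a real implementation would push the matching into the database (e.g. per-engine SQL `LIKE` clauses) rather than scan Java dictionaries, but the contract is the same: given a word, return the records that contain it in any text field.

```java
import java.util.*;

public class FullTextSearch {
    /**
     * Returns the records whose text (String) fields contain the given word,
     * case-insensitively. A record is a field-name-to-value map, matching the
     * dictionary-like structures Makumba passes around.
     */
    public static List<Map<String, Object>> search(List<Map<String, Object>> records, String word) {
        List<Map<String, Object>> hits = new ArrayList<>();
        String needle = word.toLowerCase();
        for (Map<String, Object> record : records) {
            for (Object value : record.values()) {
                if (value instanceof String && ((String) value).toLowerCase().contains(needle)) {
                    hits.add(record); // one matching field is enough
                    break;
                }
            }
        }
        return hits;
    }
}
```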
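
For the XML-XSL task, a plausible first step is serializing the dictionary-shaped records into XML; the XSL side would then be an ordinary stylesheet applied to that output. A sketch under assumed names (`RecordToXml` and the element-per-field layout are illustrative, not an existing Makumba API):

```java
import java.util.*;

public class RecordToXml {
    /**
     * Serializes one dictionary-like record into a flat XML fragment:
     * each field becomes an element named after the field, wrapped in an
     * element named after the record type.
     */
    public static String toXml(String recordType, Map<String, Object> record) {
        StringBuilder xml = new StringBuilder("<" + recordType + ">");
        for (Map.Entry<String, Object> field : record.entrySet()) {
            xml.append("<").append(field.getKey()).append(">")
               .append(escape(String.valueOf(field.getValue())))
               .append("</").append(field.getKey()).append(">");
        }
        return xml.append("</").append(recordType).append(">").toString();
    }

    /** Escapes the three characters that would break element content. */
    private static String escape(String text) {
        return text.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }
}
```

Using a `LinkedHashMap` keeps the field order stable, which matters if the XSL stylesheet relies on element order.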
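
For the TIMESTAMP idea in the SQL-efficiency row, the DDL a driver would have to emit can be sketched as a string builder. This assumes a MySQL version that supports `DEFAULT CURRENT_TIMESTAMP` and `ON UPDATE CURRENT_TIMESTAMP` on TIMESTAMP columns; older MySQL versions instead auto-updated only the first TIMESTAMP column of a table, so the real driver would need to check the engine version. The class name is hypothetical.

```java
public class TimestampDdl {
    /**
     * Generates a MySQL CREATE TABLE statement where TS_modify is maintained
     * automatically by the engine on every UPDATE and TS_create is set once
     * at insertion, so the driver never has to write either field itself.
     */
    public static String createTable(String table, String fieldDdl) {
        return "CREATE TABLE " + table + " (" + fieldDdl
            + ", TS_modify TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"
            + ", TS_create TIMESTAMP DEFAULT CURRENT_TIMESTAMP)";
    }
}
```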
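
The order-dependent, TD-marker parser for the HTML import task could start as a small regex sketch. The class name and the field-order convention (the caller names the fields in column order) are assumptions, and the regexes only cope with well-formed, un-nested tables, but that matches the "number of records in one table" case the row describes:

```java
import java.util.*;
import java.util.regex.*;

public class HtmlTableImporter {
    private static final Pattern TR = Pattern.compile("(?is)<tr[^>]*>(.*?)</tr>");
    private static final Pattern TD = Pattern.compile("(?is)<td[^>]*>(.*?)</td>");

    /**
     * Order-dependent parse: the caller says which field each TD column maps
     * to, and every TR becomes one record (a field-name-to-value map).
     */
    public static List<Map<String, String>> parse(String html, String[] fieldOrder) {
        List<Map<String, String>> records = new ArrayList<>();
        Matcher row = TR.matcher(html);
        while (row.find()) {
            Matcher cell = TD.matcher(row.group(1));
            Map<String, String> record = new LinkedHashMap<>();
            for (int i = 0; i < fieldOrder.length && cell.find(); i++) {
                record.put(fieldOrder[i], cell.group(1).trim());
            }
            if (!record.isEmpty()) {
                records.add(record); // skip rows with no TD cells (e.g. header-only TR)
            }
        }
        return records;
    }
}
```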
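
For the new-syntax/parser task, until a grammar is settled (ANTLR or otherwise), a hand-written sketch shows the shape of the job. The `field = type` line format below is purely illustrative, not the actual old or new Makumba data-definition syntax:

```java
import java.util.*;

public class DataDefinitionParser {
    /**
     * Parses lines of the form "fieldName = typeName" into an ordered map of
     * field name to type. Blank lines and "#" comments are skipped. A real
     * grammar is richer; this only shows where an ANTLR-generated parser
     * (or the org.makumba.abstr.translator output) would plug in.
     */
    public static Map<String, String> parse(String definition) {
        Map<String, String> fields = new LinkedHashMap<>();
        for (String line : definition.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) {
                continue;
            }
            int eq = line.indexOf('=');
            if (eq < 0) {
                throw new IllegalArgumentException("bad definition line: " + line);
            }
            fields.put(line.substring(0, eq).trim(), line.substring(eq + 1).trim());
        }
        return fields;
    }
}
```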