Hackathon wikidb

From GMOD
Revision as of 14:19, 24 August 2007

Hackathon wikidb components

  • middleware parts:
    • Chado to wiki:
      • modware to select gene attributes by gene name, print genes as wiki-string; Eric (a rough sketch of this step follows this list)
      • wikiloader to create the gene page, select the gene page/table template, add the gene wiki-string; Jim
    • Wiki to Chado:
      • XORT/chado xml scripts to load output of wiki/wikidb tables to chado; Josh
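
A minimal sketch of the Chado-to-wiki half, assuming a standard Chado schema (feature, featureprop, cvterm tables) and a hypothetical GeneTable page template; the actual modware script would replace the raw SQL below with Modware's Perl object layer, so treat this only as an illustration of the data flow.

 #!/usr/bin/env python
 # Sketch of "script 1": pull one gene's attributes out of Chado and
 # print them as a wiki-string built on a gene page template.
 # Standard Chado tables are assumed; the connection string and the
 # GeneTable template name are placeholders, not the hackathon's own names.
 import psycopg2

 def gene_as_wikitext(conn, gene_name):
     cur = conn.cursor()
     # the gene's own feature row
     cur.execute("""
         SELECT f.feature_id, f.uniquename, f.name
           FROM feature f
           JOIN cvterm t ON t.cvterm_id = f.type_id
          WHERE t.name = 'gene' AND f.name = %s
     """, (gene_name,))
     row = cur.fetchone()
     if row is None:
         raise ValueError("no gene named %s" % gene_name)
     feature_id, uniquename, name = row

     # free-form attributes stored in featureprop
     cur.execute("""
         SELECT t.name, fp.value
           FROM featureprop fp
           JOIN cvterm t ON t.cvterm_id = fp.type_id
          WHERE fp.feature_id = %s
     """, (feature_id,))
     props = cur.fetchall()

     # render as one wiki template call, one field per line
     lines = ["{{GeneTable", "|name=%s" % name, "|uniquename=%s" % uniquename]
     lines += ["|%s=%s" % (key, value) for key, value in props]
     lines.append("}}")
     return "\n".join(lines)

 if __name__ == "__main__":
     conn = psycopg2.connect("dbname=chado")   # placeholder connection string
     print(gene_as_wikitext(conn, "Adh"))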

Planned outcome

Simple example to collect gene(s) information from Chado db, produce intermediate Wiki-text file (script 1). This is then loaded into the Mediawiki database with gene page templates (script 2). Community folks edit the genes through the Table Edit mechanism as desired. Then the updated gene info is dumped from the mysql wikidb and loaded back into Chado via the XORT/chado xml scripts (script 3).
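
A minimal sketch of script 2, assuming the wiki-text from script 1 is wrapped in a bare MediaWiki XML import file; the Gene:Adh title and the file names are placeholders, and a real export/import dump carries more metadata than the page/revision/text skeleton shown here.

 # Sketch of "script 2": wrap the generated wiki-text for each gene in a
 # minimal MediaWiki XML import file.  Titles and file names are placeholders.
 from xml.sax.saxutils import escape

 def pages_to_import_xml(pages):
     """pages: list of (title, wikitext) pairs."""
     out = ['<mediawiki xml:lang="en">']
     for title, text in pages:
         out.append("  <page>")
         out.append("    <title>%s</title>" % escape(title))
         out.append("    <revision>")
         out.append("      <text>%s</text>" % escape(text))
         out.append("    </revision>")
         out.append("  </page>")
     out.append("</mediawiki>")
     return "\n".join(out)

 if __name__ == "__main__":
     wikitext = open("Adh.wiki").read()          # output of script 1
     xml = pages_to_import_xml([("Gene:Adh", wikitext)])
     open("genes_import.xml", "w").write(xml)

The resulting file would then be fed to the wiki with something like "php maintenance/importDump.php < genes_import.xml" (or through Special:Import).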

Genome wiki from chado notes

- From hackathon

  • tasks:
    • locate sample chado data (some format) for some genes w/ attributes
    • convert to some format suited to wiki loading (as wiki xml?)
      • dump table via Chado SQL; see e.g. http://eugenes.org/gmod/genbank2chado/conf/v_genepage3.sql (a sketch of such a view follows this list)
      • via xml/xslt transforms
      • via XORT perl parser
      • other
    • load to wiki
      • this is larger; loading into wikipedia db via wikipedia.xml
    • dump wiki table edit (mysql db)
    • convert to chado xml (? xml transforms); a rough sketch follows this list
      • flybase harvard has scripts for general bulk data to chado.xml
  • options:
    • use chado sql view/procedure to dump tables suited to wikibox_db ?
    • easier
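
A rough sketch of the dump-and-convert step above, assuming the edited gene fields can be pulled out of the wiki's mysql db with a single query; the table and column names in the SELECT are placeholders (the real Table Edit storage will differ), and the chado xml emitted shows only the general table-element/column-element nesting rather than the full document the XORT loader ultimately expects.

 # Sketch: dump edited gene fields from the wiki's mysql db and write them
 # as chado-xml for the XORT loading scripts.  The SELECT is a placeholder;
 # the emitted xml shows only the general nesting, not XORT's full format.
 import MySQLdb
 from xml.sax.saxutils import escape

 def dump_edited_genes(db):
     cur = db.cursor()
     cur.execute("SELECT gene_name, field_name, field_value"
                 " FROM gene_table_edits")        # placeholder table name
     return cur.fetchall()

 def rows_to_chado_xml(rows):
     out = ["<chado>"]
     for gene_name, prop, value in rows:
         out.append("  <feature>")
         out.append("    <name>%s</name>" % escape(gene_name))
         out.append("    <featureprop>")
         out.append("      <type_id><cvterm><name>%s</name></cvterm></type_id>"
                    % escape(prop))
         out.append("      <value>%s</value>" % escape(value))
         out.append("    </featureprop>")
         out.append("  </feature>")
     out.append("</chado>")
     return "\n".join(out)

 if __name__ == "__main__":
     db = MySQLdb.connect(db="wikidb")            # placeholder connection
     print(rows_to_chado_xml(dump_edited_genes(db)))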
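
For the chado sql view/procedure option, a guess at the shape such a view might take (this is not a copy of the v_genepage3.sql linked above): a flat gene/attribute/value view over the standard Chado tables, dumped as tab-delimited text for the wiki loader.

 # Sketch of the sql-view option: one flat row per gene attribute,
 # dumped as tab-delimited text.  Column choices are illustrative only.
 import psycopg2

 VIEW_SQL = """
 CREATE OR REPLACE VIEW v_genepage AS
 SELECT f.name       AS gene,
        f.uniquename AS uniquename,
        pt.name      AS attribute,
        fp.value     AS value
   FROM feature f
   JOIN cvterm ft ON ft.cvterm_id = f.type_id AND ft.name = 'gene'
   LEFT JOIN featureprop fp ON fp.feature_id = f.feature_id
   LEFT JOIN cvterm pt ON pt.cvterm_id = fp.type_id
 """

 if __name__ == "__main__":
     conn = psycopg2.connect("dbname=chado")      # placeholder connection string
     cur = conn.cursor()
     cur.execute(VIEW_SQL)
     cur.execute("SELECT * FROM v_genepage ORDER BY gene")
     for row in cur.fetchall():
         print("\t".join("" if col is None else str(col) for col in row))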