The list of your projects. Project data files (including datasets, models, and jobs) are stored in ~/.cnsface, in folders named with four-letter codes. The Projects tab only shows the name of each project.
Fast mode - run.
Reflection file format check.
Allows renaming a project. The four-letter code associated with the project remains unchanged.
Project browser to access project content and preferences.
Datasets are shown in a table that lists the following parameters: name, space group, unit cell parameters (a, b, c, alpha, beta, gamma), and low and high resolution limits. These parameters can be changed by clicking on the appropriate fields and editing them inline. Changes only take effect after you click the Apply button (fields are not reset if you switch tabs and then come back to Datasets). Parameters that were changed but not yet applied are highlighted until the changes are applied. The space group symbol must conform to the CNS format (e.g. P2(1)2(1)2, not p21212).
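The CNS-style space group convention above (uppercase lattice letter, screw components in parentheses, no spaces) can be checked with a simple pattern. This is only an illustrative sketch, not the validation CNSFACE performs, and the helper name is made up:

```python
import re

def is_cns_space_group(symbol):
    """Rough check that a space-group symbol follows the CNS
    convention described above: uppercase lattice letter, digits with
    screw axes written in parentheses, no spaces.
    Accepts e.g. 'P2(1)2(1)2'; rejects 'p21212' and 'P 21 21 2'.
    (Heuristic sketch only -- not the real CNSFACE code.)"""
    pattern = r"[PABCIFR](-?\d(\(\d\))?|[mnabcd]|/[mnabcd]|-)*"
    return re.fullmatch(pattern, symbol) is not None
```

A check like this catches the most common mistake (typing the plain Hermann-Mauguin symbol such as p21212) before CNS rejects the script.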
The currently selected dataset will be used for any CNS job you run (if one is required). This is potentially problematic if your style is to keep one huge project with dozens of reflection files, since you may end up using the wrong reflection file. I can only suggest using one dataset per project.
Imports a dataset. Currently, only CNS-formatted reflection files can be imported (although no format check is implemented). When you import a file, it is copied into the project folder, so you can continue working with it even if the original reflection file is later moved.
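The missing format check mentioned above could be approximated with a quick heuristic. If I recall the CNS reflection syntax correctly, a reflection file declares the reflection count with an NREF... statement and lists each reflection on a line starting with the INDE keyword; the sketch below (hypothetical helper, not CNSFACE code) just looks for those tokens:

```python
def looks_like_cns_reflections(text):
    """Heuristic sketch of the format check the GUI currently skips
    (not the real CNSFACE code). Assumes a CNS reflection file
    contains an NREF... statement and INDE... reflection records."""
    upper = text.upper()
    return "NREF" in upper and "INDE" in upper
```

This would not validate the file, but it is enough to reject an accidentally imported PDB or MTZ file.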
Removes the currently selected dataset. Issues two warnings; after that, the dataset and all associated information are deleted. If the dataset was used by any job, the Coot launcher will fail to generate the map for that job.
Applies all the changes made manually to dataset parameters.
Copies all the parameters from the next-to-last dataset to the last one. Useful if you add another dataset that has identical parameters. This will be obsoleted when MTZ import is implemented. Currently, every newly imported dataset inherits parameters automatically, so you rarely need this option anyway.
MTZ and SCA files import.
Reflection file format check.
Input files are stored in ~/.cnsface/inputs in processed format. Default files are kept in /usr/share/cnsface/inputs and can be imported from this central location to replace the current user's customized set. This customization allows the user to have a comfortable environment with a personal set of defaults. Input files are shown as a tree one branch deep (sub-subfolders are ignored), arranged according to the directory record of the respective CNS script.
This opens a dialog identical to the one invoked by CNS script import. Script default parameters and their status can then be customized.
Runs the selected script. The model and dataset (if required) are taken from the current selections in the appropriate tabs (you are prompted if none is selected yet). This first opens a dialog where commonly used parameters are set, and then executes the automatically generated CNS script. A new item will appear in the Jobs tab.
Same as the Run button, except that the script content is shown in a separate window before the job is launched. At that point, the user can edit the script manually.
Removes the selected script from the library.
When this is checked, all the options available for a particular script will be shown.
Fast mode - run.
Reflection file format check.
Models are associated with projects. Every project folder has a pdb subfolder where models are stored in folders under randomly generated four-letter codes. Every time a model is modified (i.e. when coordinate_outfile is selected as a fill-in value), a new model instance is generated. The model's nth instance is saved as ####.pdb.n. A model has to be selected using the radio buttons to specify which one to use when launching jobs (as far as I am concerned, there is really no reason to keep more than one model at a time); the latest instance of the selected model is used.
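Picking "the latest instance" under the ####.pdb.n naming scheme described above amounts to finding the highest n. A minimal sketch (the helper name and list-based input are illustrative, not CNSFACE API):

```python
import re

def latest_instance(filenames, code):
    """Return the highest instance number n among files named
    '<code>.pdb.n' (the naming scheme described above), or None if
    no instance exists. Illustrative sketch, not CNSFACE code."""
    best = -1
    pat = re.compile(re.escape(code) + r"\.pdb\.(\d+)$")
    for name in filenames:
        m = pat.match(name)
        if m:
            best = max(best, int(m.group(1)))
    return best if best >= 0 else None
```

In practice the filenames would come from listing the model's four-letter-code folder.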
Imports a new model from a PDB file. This creates the zero instance; no checks are performed on the content of the imported PDB file.
This will export the selected model instance into an external pdb file. Useful if you want to switch to another refinement program and don't like the brilliant idea of starting from scratch.
Generates the MTF (molecular topology file). This is done using a version of generate_easy that allows for protein, waters, DNA/RNA, and ions. Ligand import is not implemented yet. The resulting log file is checked for errors, and the user is alerted if any are found. This check is not very intelligent yet, so you need to make sure your imported PDB file is OK.
Verifies whether the current version of the MTF is compatible with the most recent instance of the model. This button and the "MTF..." button will be obsoleted at some point, since the MTF is checked and (re)generated, if necessary, whenever you launch a job.
Removes the model (not a particular instance!). GUI asks you twice to reconsider, and there is a good reason for that - the model folder is irreversibly erased.
Shows the content of the PDB file associated with the selected model instance.
This affects the behavior of the View button. When checked, only the REMARK records will be shown.
Model transfer between projects
Instance selection for job launching
Read the instance statistics
Convert an imported file to conform to CNS format
Every project has a job folder with subfolders named by job ID number (IDs of deleted jobs may be recycled; see below). Each job folder contains the input file used to launch a CNS job and the corresponding log file (these have standard names). The tab contains a table of processed jobs that lists the following:
|Job ID||Unique number assigned to each job. Job IDs are recycled, i.e. if a job is deleted, the counter is adjusted to reflect the smallest possible job ID. When a model instance is generated by a job, the job ID is given in parentheses in the model tab (the model instance ID will not match the job ID).|
|Script||The name of the script launched by the job. By default, this will correspond to the script file name as listed in inputs tab.|
|Comment||This is equivalent (at least for now) to the comment associated with the input CNS script used to launch the job.|
|Status||STARTING: job launched but CNS not invoked yet
RUNNING: CNS_SOLVE running
FINISHED: job finished
KILLED: job was terminated by user
FAILED: not implemented yet|
|Date||When the job was launched.|
|Elapsed/CPU time||While the job is running, this shows the elapsed time (updated every five seconds). When the job finishes, it is replaced with the CPU time from the CNS log file. This will not match the actual time it took to run the job; I feel that the parallelized version of CNS reports the CPU time as if it were running on a single processor (the real processing time will be roughly 4 times shorter if you are among the lucky ones with a $500 quad-core machine <mad cackle>).|
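The "smallest possible job ID" recycling rule from the table above can be sketched in a few lines (an illustrative helper, not the actual CNSFACE code):

```python
def next_job_id(existing_ids):
    """Return the smallest positive integer not currently in use,
    matching the 'smallest possible job ID' recycling rule described
    above. Illustrative sketch, not the actual CNSFACE code."""
    used = set(existing_ids)
    n = 1
    while n in used:
        n += 1
    return n
```

For example, if jobs 1, 2, and 4 exist (job 3 was deleted), the next job gets ID 3.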
Gets the list of the output files produced by the script, excluding the script.in and cnsfrun.log files. Selecting a file shows its content. Useful for scripts producing meaningful output (e.g. a list of B-factors, the rmsd between two structures, etc.).
This will launch Coot (if you have it installed). If the selected job produced an output model, it will be loaded. Some limited file conversion is done, including fixing the element column (so that your bonds are not all white) and adding the CRYST1 record (so that you can display symmetry-related copies). If the job required a dataset (e.g. minimization, annealing, B-factor refinement), an mtz file will be generated and loaded. Once you are done rebuilding, just save your final coordinates in a file and close Coot. CNSFACE will ask if you want to import them as the next instance of the model. This feedback system will break if you either exit CNSFACE before closing Coot or save the coordinates outside the default directory (~/.cnsface/####/jobs/ID/coot, where #### is the four-letter project code and ID is the job ID).
The file conversion is currently limited. The Coot feedback loop was so far tested successfully for making minor changes to the structure (e.g. fixing sidechains), adding waters, ions, and terminal residues. What has not been implemented yet (but will be) is working with DNA, ligands and alternate conformers.
Shows the log file.
Not implemented yet. It will allow rerunning the same script with the latest model.
Kills the selected job. The job is not removed from the job list but assigned the KILLED status.
Deletes the selected job from the database. It in fact erases the job folder, so that the job ID can be recycled if possible.
Delete multiple jobs
Edit job comments