|  |     We're facing the same problem with two global sections we use for
    validation and translation.  Our plan is to have two locks associated
    with each section.  Lock one indicates ownership of the
    editing function: only one editor of a global section at a time
    across the cluster.  Lock two is used to fire a blocking AST on a
    background process running on each system in the cluster.
    
    When a user wants to edit a section, grab the "edit" lock, perform
    the edit, and flush the section out to disk.  Then grab the "AST" lock
    to notify all the background processes that they should update.
    Each of them forces an update of the section from disk.  Voila! 
    A cluster-wide global section update.
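    The two-lock flow above can be sketched in plain C.  This is only a
    simulation of the idea: the `edit_lock_held` flag, the `cache` array,
    and `ast_refresh()` are all invented names, standing in for what would
    really be $ENQ/$DEQ calls and a blocking-AST routine in each node's
    background process.

```c
#include <assert.h>
#include <string.h>

#define NODES 3

/* Simulated cluster: one "disk" copy of the section, plus a cached
 * copy on each node, refreshed by that node's background process. */
static char disk[64];
static char cache[NODES][64];

/* Lock one: edit ownership.  Only one editor cluster-wide. */
static int edit_lock_held = 0;

/* Stands in for the blocking AST fired via lock two: every node's
 * background process re-reads the section from disk. */
static void ast_refresh(void)
{
    for (int n = 0; n < NODES; n++)
        memcpy(cache[n], disk, sizeof disk);
}

static void edit_section(const char *new_contents)
{
    assert(!edit_lock_held);     /* grab the "edit" lock */
    edit_lock_held = 1;

    strcpy(disk, new_contents);  /* edit, then flush out to disk */

    ast_refresh();               /* grab the "AST" lock: notify all
                                    the background processes */
    edit_lock_held = 0;          /* release the "edit" lock */
}
```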
    
    At least that's what I'm thinking ...
    
    thanks,
    bob.
 | 
|  |     You could do it with one lock:
    
    	CR mode:	reader
    	PW mode:	editor
    	EX mode:	editor "broadcast" of updates
    
    When the editor is done editing, it flushes the section, then
    promotes the lock to EX mode.  This sends a blocking AST to
    all readers, who demote to NL mode, re-read the section, and
    then re-request the lock in CR mode.
    
    When the editor gets the EX lock granted, it knows that all
    the readers have gotten the message, and releases the lock.
    Then the readers' re-requests (CR mode) are granted, and
    they can continue.
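    Why the EX promotion forces the readers out falls straight out of the
    DLM's lock-mode compatibility matrix.  Here it is coded up so the
    claim can be checked; the table is the standard VMS one, the enum and
    array names are just for illustration.

```c
#include <assert.h>

/* VMS DLM lock modes, in increasing strength. */
enum mode { NL, CR, CW, PR, PW, EX };

/* compat[held][requested]: nonzero if a new request in mode
 * `requested` can be granted while a lock is held in mode `held`. */
static const int compat[6][6] = {
    /*          NL CR CW PR PW EX */
    /* NL */  {  1, 1, 1, 1, 1, 1 },
    /* CR */  {  1, 1, 1, 1, 1, 0 },
    /* CW */  {  1, 1, 1, 0, 0, 0 },
    /* PR */  {  1, 1, 0, 1, 0, 0 },
    /* PW */  {  1, 1, 0, 0, 0, 0 },
    /* EX */  {  1, 0, 0, 0, 0, 0 },
};
```

    Note that PW is compatible with CR, so the editor can work without
    disturbing the readers; EX is compatible with nothing but NL, which
    is exactly what forces every reader to demote (and re-read) before
    the "broadcast" completes.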
    
    The real problem with this whole approach is that VMS may
    decide to page in some of the section while it is actually
    being flushed by the editor.  This can
    cause the section to be inconsistent (unless you can guarantee
    atomicity of updates at a disk-block level).  For example,
    a "pointer" from one block to another may appear to be broken
    because a "new" block has been paged in prior to the sending of
    the "broadcast" message.
    
    In VAX DBMS and Rdb/VMS, we implemented "cluster-wide global
    sections", but the basic difference is that we use page-file
    sections on each node, and "manually" do $QIOs to read and
    write the data in the page-file to the actual data file.
    By preventing VMS from paging directly to the data file, we
    are able to control the refreshing to ensure consistency.
    
    We then use the lock manager to synchronize copies in the cluster,
    much as you described.
    
    One difference, though, is that users on the same node as the
    "editor" should not be required to re-read the section.  We
    avoid this by using the lock value block in the lock to save
    a "version number" for the current copy of the section.  This version
    number is also "stashed" in the page-file section (local to each
    node).  If the version in the value block matches the reader's stashed
    copy, the reader doesn't have to refresh the section, since his node
    already has the current one.
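    The version-number shortcut might look like this in C.  The variable
    names are invented; on VMS the cluster-wide version would travel in
    the 16-byte lock value block on the $ENQ, and the stashed copy would
    live in the node's local page-file section.

```c
#include <assert.h>

/* Version carried in the lock value block, visible cluster-wide. */
static unsigned lock_value_block_version = 7;

/* Version "stashed" in this node's local page-file section. */
static unsigned local_version = 7;

static int refreshes_done = 0;

/* Called when a reader re-acquires its CR lock after a broadcast. */
static void maybe_refresh(void)
{
    if (local_version != lock_value_block_version) {
        /* Another node edited the section: re-read it from disk. */
        refreshes_done++;
        local_version = lock_value_block_version;
    }
    /* else: the editor was on this node, so the local copy is
     * already current and the re-read is skipped entirely. */
}
```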
            
    -steve-
 | 
|  |     RE: .-1
	Hmm, I like it.  My goal was to get some ideas on doing the
    translations without using a section.  The problems I see include the
    inability to grow the section on the fly, which leads to wasted space.
    As it is, the section is 6700 pages big.  Are there better ways?  An
    indexed file with mucho global buffers?  Too much overhead, I think,
    and hardly blazing speed.  Any other thoughts?  I am looking for ideas.
 | 
|  |     SEC$M_PAGFIL sections combined with LIB$mumble_TREE calls?
    
    If you do your locking right, you should never have to worry about
    a page being brought in on another processor while you're doing a
    flush.  Always lock the section before doing anything to it, and
    grab it in EX mode before a flush.  Nothing will get paged in on ANY
    processor until the requestor can get a lock, and he won't get the
    lock until the flush is complete.  Or am I missing something?
    
    #ken	:-)}
 |