using xpans as a proxy for xpaset/xpaget (as in chandra-ed):

Currently, when xpans is used as a proxy connection, an xpaset or xpaget
command returns without waiting for the remote callback to complete. This
is because xpans sets the mode of XPASetFd()/XPAGetFd() to "dofork=true"
to ensure that XPAClientLoopFork() in client.c is used. That routine does
not wait for the callback to complete; instead, it forks the callback and
then "fakes" the completion. Using the forked loop and returning
immediately prevents xpans from hanging during a long xpaget/xpaset
callback.

But starting with ds9 7.0, this causes problems: two successive xpaset
calls that both process regions will hang the connection to ds9. For
example, in chandra-ed's zhds9 script, a new region is generated by
refinepos, then the old region is deleted and the new region is sent to
ds9 for display:

  # refinepos creates a new region ...
  ${XPASET:-xpaset} -p $XPA regions deleteall
  ${XPASET:-xpaset} $XPA regions < $OFILE

Starting with ds9 7.0, this hangs the xpans connection to ds9. Bill
reports that region parsing got slightly slower in 7.0 because he is now
creating a C++ object to pass to the lexer. Perhaps that slowdown exposes
a race condition? To work around the problem, a delay is needed between
the calls:

  ${XPASET:-xpaset} -p $XPA regions deleteall
  # this is needed to avoid a race condition, due to the fact that
  # xpaset returns immediately when run via xpans proxy
  sleep 1
  ${XPASET:-xpaset} $XPA regions < $OFILE

How can this be handled automatically? Ideally, one would fork() or
create a thread at the "right" place in xpans to handle the xpaset/xpaget
call, so that it can wait for completion. But it's unclear where that
place is: XPAHandler() is calling the send or receive callback and
expects a status response, and how can fork() deal with that?