author    William Joye <wjoye@cfa.harvard.edu>  2016-10-27 19:39:39 (GMT)
committer William Joye <wjoye@cfa.harvard.edu>  2016-10-27 19:39:39 (GMT)
commit    ea28451286d3ea4a772fa174483f9a7a66bb1ab3 (patch)
tree      6ee9d8a7848333a7ceeee3b13d492e40225f8b86 /tcllib/modules/amazon-s3
parent    b5ca09bae0d6a1edce939eea03594dd56383f2c8 (diff)
parent    7c621da28f07e449ad90c387344f07a453927569 (diff)
Merge commit '7c621da28f07e449ad90c387344f07a453927569' as 'tcllib'
Diffstat (limited to 'tcllib/modules/amazon-s3')
-rw-r--r--  tcllib/modules/amazon-s3/ChangeLog         64
-rw-r--r--  tcllib/modules/amazon-s3/LICENSE.txt       52
-rw-r--r--  tcllib/modules/amazon-s3/README.txt        56
-rw-r--r--  tcllib/modules/amazon-s3/S3.man          1450
-rw-r--r--  tcllib/modules/amazon-s3/S3.tcl          1960
-rw-r--r--  tcllib/modules/amazon-s3/S3.test         1766
-rw-r--r--  tcllib/modules/amazon-s3/TODO.txt          20
-rw-r--r--  tcllib/modules/amazon-s3/pkgIndex.tcl       9
-rw-r--r--  tcllib/modules/amazon-s3/test-S3.config     2
-rw-r--r--  tcllib/modules/amazon-s3/xsxp.man         137
-rw-r--r--  tcllib/modules/amazon-s3/xsxp.tcl         254
-rw-r--r--  tcllib/modules/amazon-s3/xsxp.test        166
12 files changed, 5936 insertions, 0 deletions
diff --git a/tcllib/modules/amazon-s3/ChangeLog b/tcllib/modules/amazon-s3/ChangeLog
new file mode 100644
index 0000000..643246e
--- /dev/null
+++ b/tcllib/modules/amazon-s3/ChangeLog
@@ -0,0 +1,64 @@
+2013-12-17 Andreas Kupries <andreask@activestate.com>
+
+ * xsxp.man: Fixed missing requirement on the package itself.
+ * S3.man: Ditto.
+
+2013-02-08 Andreas Kupries <andreask@activestate.com>
+
+ * xsxp.man: Fixed moddesc/titledesc confusion.
+ * S3.man: Ditto.
+
+2013-02-01 Andreas Kupries <andreas_kupries@users.sourceforge.net>
+
+ *
+ * Released and tagged Tcllib 1.15 ========================
+ *
+
+2011-12-13 Andreas Kupries <andreas_kupries@users.sourceforge.net>
+
+ *
+ * Released and tagged Tcllib 1.14 ========================
+ *
+
+2011-01-24 Andreas Kupries <andreas_kupries@users.sourceforge.net>
+
+ *
+ * Released and tagged Tcllib 1.13 ========================
+ *
+
+2009-12-07 Andreas Kupries <andreas_kupries@users.sourceforge.net>
+
+ *
+ * Released and tagged Tcllib 1.12 ========================
+ *
+
+2008-12-12 Andreas Kupries <andreas_kupries@users.sourceforge.net>
+
+ *
+ * Released and tagged Tcllib 1.11.1 ========================
+ *
+
+2008-10-16 Andreas Kupries <andreas_kupries@users.sourceforge.net>
+
+ *
+ * Released and tagged Tcllib 1.11 ========================
+ *
+
+2008-09-03 Andreas Kupries <andreas_kupries@users.sourceforge.net>
+
+ * S3.test: Added guards to protect testsuite against a
+ * xsxp.test: missing xml package.
+
+2008-07-08 Andreas Kupries <andreask@activestate.com>
+
+ * xsxp.man: Added boilerplate section to the documentation
+ * S3.man: directing bug reports and other feedback to the Tcllib
+ SF trackers.
+
+ * xsxp.test: Added the boilerplate necessary for integration
+ * S3.test: with tcllib's test framework.
+
+ * New module 'amazon-s3', with packages S3 and xsxp, by Darren
+ New. Physical integration of all the new files.
+ Todo: Integration with the installer, and fixes for the
+ testsuites to use Tcllib's boilerplate.
diff --git a/tcllib/modules/amazon-s3/LICENSE.txt b/tcllib/modules/amazon-s3/LICENSE.txt
new file mode 100644
index 0000000..9f43682
--- /dev/null
+++ b/tcllib/modules/amazon-s3/LICENSE.txt
@@ -0,0 +1,52 @@
+This software is copyrighted by Darren New.
+The following terms apply to all files associated with the software unless
+explicitly disclaimed in individual files.
+
+The authors hereby grant permission to use, copy, modify, distribute,
+and license this software and its documentation for any purpose, provided
+that existing copyright notices are retained in all copies and that this
+notice is included verbatim in any distributions. No written agreement,
+license, or royalty fee is required for any of the authorized uses.
+Modifications to this software may be copyrighted by their authors
+and need not follow the licensing terms described here, provided that
+the new terms are clearly indicated on the first page of each file where
+they apply.
+
+IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY
+FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES
+ARISING OUT OF THE USE OF THIS SOFTWARE, ITS DOCUMENTATION, OR ANY
+DERIVATIVES THEREOF, EVEN IF THE AUTHORS HAVE BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,
+INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. THIS SOFTWARE
+IS PROVIDED ON AN "AS IS" BASIS, AND THE AUTHORS AND DISTRIBUTORS HAVE
+NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR
+MODIFICATIONS.
+
+GOVERNMENT USE: If you are acquiring this software on behalf of the
+U.S. government, the Government shall have only "Restricted Rights"
+in the software and related documentation as defined in the Federal
+Acquisition Regulations (FARs) in Clause 52.227.19 (c) (2). If you
+are acquiring the software on behalf of the Department of Defense, the
+software shall be classified as "Commercial Computer Software" and the
+Government shall have only "Restricted Rights" as defined in Clause
+252.227-7013 (c) (1) of DFARs. Notwithstanding the foregoing, the
+authors grant the U.S. Government and others acting in its behalf
+permission to use and distribute the software in accordance with the
+terms specified in this license.
+
+*** *** That said ... ** ***
+
+The author would appreciate feedback, bug fixes,
+and improvements in functionality. The author would
+also appreciate acknowledgement if you acknowledge
+contributors in your distribution of work. Neither
+of these are requirements for using or distributing
+the code, modifications thereof, or programs making
+use of it.
+
+The author can be reached for the foreseeable future
+at dnew@san.rr.com or via his SourceForge account.
+
diff --git a/tcllib/modules/amazon-s3/README.txt b/tcllib/modules/amazon-s3/README.txt
new file mode 100644
index 0000000..3aa79a8
--- /dev/null
+++ b/tcllib/modules/amazon-s3/README.txt
@@ -0,0 +1,56 @@
+This is Darren New's package for interfacing with Amazon's S3 web service.
+This is in beta stage, but potentially useful to others already.
+
+Note that test-S3.config should have your own account identifiers
+entered, in order to run the tcltest.
+
+I'm hoping this eventually makes it into TclLib. To that end, I have
+tried to avoid using any packages that aren't widely available on
+all platforms, sticking with tcllib and ActiveState stuff as much
+as possible.
+
+Note that "xsxp.tcl" and associated packaging is necessary for this
+system. Plus, there are a few places where I used [dict] and {expand}.
+To make this work with 8.5 release, {expand} needs to be changed to {*}.
+To make this work with 8.4, you need a Tcl implementation of [dict]
+and you need to change {expand} into [eval] and [list] as appropriate.
+If you make either of these changes, please bop me back a copy of
+the changes <dnew@san.rr.com> and I'll make a new package.
+
+Manifest:
+
+README.txt - this file.
+LICENSE.txt - the license for use and redistribution. It's BSD.
+S3.man - the beginnings of a Tcl-format man page for S3.
+S3.test - The tcltest calls to the S3 package.
+ (Note that S3::REST has actually been extensively tested by
+ yours truly, but the tests were manual "call the routine,
+ print the results", and I haven't taken time to repackage them
+ in Tcltest format. But I will.)
+test-S3.config - a call to S3::Configure to set your personal
+ access identifiers so you can run S3.test.
+S3.tcl - The actual source code for the S3 interface package.
+xsxp.tcl - Extremely Simple XML Parser. It uses the TclXML package
+ to build nested dictionaries, and supplies simple ways of
+ getting to the data. I use it to parse the results from
+ S3's bucket listings and such, because I couldn't get TclDOM
+ to install on my machine.
+xsxp.test - The tcltests for xsxp.
+pkgIndex.tcl - For S3 and xsxp.
+
+A few notes:
+
+I expect to break this into several "layers". S3::REST doesn't
+require any XML parsing. The routines dealing with buckets and
+listings parse the XML to return the information in a useful form.
+
+The bucket deletion test code is disabled because Amazon has
+been having trouble with bucket creation/deletion leaving
+things in an inconsistent state.
+
+FEEDBACK WELCOME! -- Please include me in email for any
+comments or bug reports about the software. Thanks!
+(I usually don't want to be cc'ed on newsgroup posts, but
+this is an exception.)
+
+THANKS!
diff --git a/tcllib/modules/amazon-s3/S3.man b/tcllib/modules/amazon-s3/S3.man
new file mode 100644
index 0000000..809116a
--- /dev/null
+++ b/tcllib/modules/amazon-s3/S3.man
@@ -0,0 +1,1450 @@
+[vset VERSION 1.0.3]
+[manpage_begin S3 n [vset VERSION]]
+[keywords amazon]
+[keywords cloud]
+[keywords s3]
+[moddesc {Amazon S3 Web Service Utilities}]
+[titledesc {Amazon S3 Web Service Interface}]
+[category Networking]
+[copyright {Copyright 2006,2008 Darren New. All Rights Reserved. See LICENSE.TXT for terms.}]
+[require Tcl 8.5]
+[require S3 [opt [vset VERSION]]]
+[require sha1 1.0]
+[require md5 2.0]
+[require base64 2.3]
+[require xsxp 1.0]
+[description]
+This package provides access to Amazon's Simple Storage Service (S3).
+
+[para]
+As a quick summary, the Amazon Simple Storage Service
+provides a for-fee web service allowing the storage of arbitrary data as
+"resources" within "buckets" online.
+See [uri http://www.amazonaws.com/] for details on that system.
+Access to the service is via HTTP (SOAP or REST). Much of this
+documentation will not make sense if you're not familiar with
+the terms and functionality of the Amazon S3 service.
+
+[para]
+This package provides services for reading and writing
+the data items via the REST interface. It also provides some
+higher-level operations. Other packages in the same distribution
+provide for even more functionality.
+
+[para]
+Copyright 2006 Darren New. All Rights Reserved.
+NO WARRANTIES OF ANY TYPE ARE PROVIDED.
+COPYING OR USE INDEMNIFIES THE AUTHOR IN ALL WAYS.
+This software is licensed under essentially the same
+terms as Tcl. See LICENSE.txt for the terms.
+
+[section "ERROR REPORTING"]
+
+The error reporting from this package makes use of $errorCode to
+provide more details on what happened than simply throwing an error.
+Any error caught by the S3 package (and we try to catch them all)
+will return with an $errorCode being a list having at least three
+elements. In all cases, the first element will be "S3". The second
+element will take on one of six values, with that element defining
+the value of the third and subsequent elements. S3::REST does not
+throw an error, but rather returns a dictionary with the keys "error",
+"errorInfo", and "errorCode" set. This allows for reliable background
+use. The possible second elements are these:
+
+[list_begin definitions]
+[def usage] The usage of the package is incorrect. For example,
+a command has been invoked which requires the library to be configured
+before the library has been configured, or an invalid combination of
+options has been specified. The third element of $errorCode supplies
+the name of the parameter that was wrong. The fourth usually provides
+the arguments that were actually supplied to the throwing proc, unless
+the usage error isn't confined to a single proc.
+
+[def local] Something happened on the local system which threw
+an error. For example, a request to upload or download a file was made
+and the file permissions denied that sort of access. The third element
+of $errorCode is the original $errorCode.
+
+[def socket] Something happened with the socket. It closed
+prematurely, or some other condition of failure-to-communicate-with-Amazon
+was detected. The third element of $errorCode is the original $errorCode,
+or, in some cases, the message from fcopy.
+
+[def remote] The Amazon web service returned an error code outside
+the 2xx range in the HTTP header. In other words, everything went as
+documented, except this particular case was documented not to work.
+The third element is the dictionary returned from [cmd ::S3::REST].
+Note that S3::REST itself never throws this error, but just returns
+the dictionary. Most of the higher-level commands throw for convenience,
+unless an argument indicates they should not. If something is documented
+as "not throwing an S3 remote error", it means a status return is set
+rather than throwing an error if Amazon returns a non-2XX HTTP result code.
+
+[def notyet] The user obeyed the documentation, but the author
+has not yet gotten around to implementing this feature. (Right now,
+only TLS support and sophisticated permissions fall into this category,
+as well as the S3::Acl command.)
+
+[def xml] The service has returned invalid XML, or XML whose
+schema is unexpected. For the high-level commands that accept
+service XML as input for parsing, this may also be thrown.
+
+[list_end]
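+
+[para]
+As an illustrative sketch (the bucket and resource names here are
+hypothetical), a caller might dispatch on the second element of
+$errorCode like so:
+[example {
+if {[catch {
+    S3::Get -bucket test-bucket -resource greeting.txt \
+        -file /tmp/greeting.txt
+} err]} {
+    # $errorCode is a list beginning with "S3"; its second
+    # element classifies the failure.
+    switch -- [lindex $::errorCode 1] {
+        remote  { puts "Amazon returned a non-2XX HTTP status" }
+        socket  { puts "could not communicate with Amazon" }
+        usage   { puts "the library was called incorrectly" }
+        default { puts "other S3 error: $err" }
+    }
+}
+}]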
+
+[section COMMANDS]
+This package provides several separate levels of complexity.
+
+[list_begin itemized]
+[item]
+The lowest level simply takes arguments to be sent to the service,
+sends them, retrieves the result, and provides it to the caller.
+[emph Note:] This layer allows both synchronous and event-driven
+processing. It depends on the MD5 and SHA1 and base64 packages
+from Tcllib (available at [uri http://core.tcl.tk/tcllib/]).
+Note that [cmd S3::Configure] is required for [cmd S3::REST] to
+work due to the authentication portion, so we put that in the "lowest level."
+
+[item]
+The next layer parses the results of calls, allowing for functionality
+such as uploading only changed files, synchronizing directories,
+and so on. This layer depends on the [package TclXML] package as well as the
+included [package xsxp] package. These packages are package required when
+these more-sophisticated routines are called, so nothing breaks if
+they are not correctly installed.
+
+[item]
+Also included is a separate program that uses the library.
+It provides code to parse $argv0 and $argv from the
+command line, allowing invocation as a tclkit, etc.
+(Not yet implemented.)
+
+[item]
+Another separate program provides a GUI interface allowing drag-and-drop
+and other such functionality. (Not yet implemented.)
+
+[item]
+Also built on this package is the OddJob program. It is
+a separate program designed to allow distribution of
+computational work units over Amazon's Elastic Compute
+Cloud web service.
+
+[list_end]
+
+[para]
+The goal is to have at least the bottom-most layers implemented in
+pure Tcl using only that which comes from widely-available sources,
+such as Tcllib.
+
+[section "LOW LEVEL COMMANDS"]
+These commands do not require any packages not listed above.
+They talk directly to the service, or they are utility or
+configuration routines. Note that the "xsxp" package was
+written to support this package, so it should be available
+wherever you got this package.
+
+[list_begin definitions]
+
+[call [cmd S3::Configure] \
+[opt "[option -reset] [arg boolean]"] \
+[opt "[option -retries] [arg integer]"] \
+[opt "[option -accesskeyid] [arg idstring]"] \
+[opt "[option -secretaccesskey] [arg idstring]"] \
+[opt "[option -service-access-point] [arg FQDN]"] \
+[opt "[option -use-tls] [arg boolean]"] \
+[opt "[option -default-compare] [arg always|never|exists|missing|newer|date|checksum|different]"] \
+[opt "[option -default-separator] [arg string]"] \
+[opt "[option -default-acl] [arg private|public-read|public-read-write|authenticated-read|keep|calc]"] \
+[opt "[option -default-bucket] [arg bucketname]"] \
+]
+
+There is one command for configuration, and that is [cmd S3::Configure].
+If called with no arguments, it returns a
+dictionary of key/value pairs listing all current settings. If called
+with one argument, it returns the value of that single argument. If
+called with two or more arguments, it must be called with pairs of
+arguments, and it applies the changes in order. There is only one set
+of configuration information per interpreter.
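+[para]
+For instance, a typical initialization might look like this sketch
+(the key values are placeholders, not real credentials):
+[example {
+S3::Configure -accesskeyid "YOURACCESSKEYID" \
+    -secretaccesskey "YOURSECRETACCESSKEY" \
+    -retries 5 -default-acl private
+# With one argument, Configure returns that single setting:
+puts [S3::Configure -retries]
+}]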
+[para]
+The following options are accepted:
+
+[list_begin definitions]
+
+[def "[option -reset] [arg boolean]"]
+By default, false. If true, any previous changes and any changes on the
+same call before the reset option will be returned to default values.
+
+[def "[option -retries] [arg integer]"]
+Default value is 3.
+If Amazon returns a 500 error, a retry after an exponential
+backoff delay will be tried this many times before finally
+throwing the 500 error. This applies to each call to [cmd S3::REST]
+from the higher-level commands, but not to [cmd S3::REST] itself.
+That is, [cmd S3::REST] will always return httpstatus 500 if that's
+what it receives. Functions like [cmd S3::Put] will retry the PUT call,
+and will also retry the GET and HEAD calls used to do content comparison.
+Changing this to 0 will prevent retries and their associated delays.
+In addition, socket errors (i.e., errors whose errorCode starts with
+"S3 socket") will be similarly retried after backoffs.
+
+[def "[option -accesskeyid] [arg idstring]"]
+[def "[option -secretaccesskey] [arg idstring]"]
+Each defaults to an empty string.
+These must be set before any calls are made. This is your S3 ID.
+Once you sign up for an account, go to [uri http://www.amazonaws.com/],
+sign in, go to the "Your Web Services Account" button, pick "AWS
+Access Identifiers", and your access key ID and secret access keys
+will be available. All [cmd S3::REST] calls are authenticated.
+Blame Amazon for the poor choice of names.
+
+[def "[option -service-access-point] [arg FQDN]"]
+Defaults to "s3.amazonaws.com". This is the fully-qualified domain
+name of the server to contact for [cmd S3::REST] calls. You should
+probably never need to touch this, unless someone else implements
+a compatible service, or you wish to test something by pointing
+the library at your own service.
+
+[def "[option -slop-seconds] [arg integer]"]
+When comparing dates between Amazon and the local machine,
+two dates within this many seconds of each other are considered
+the same. Useful for clock drift correction, processing overhead
+time, and so on.
+
+[def "[option -use-tls] [arg boolean]"]
+Defaults to false. This is not yet implemented. If true, [cmd S3::REST] will
+negotiate a TLS connection to Amazon. If false, unencrypted connections
+are used.
+
+[def "[option -bucket-prefix] [arg string]"]
+Defaults to "TclS3". This string is used by [cmd S3::SuggestBucketName]
+if that command is passed an empty string as an argument. It is used
+to distinguish different applications using the Amazon service.
+Your application should always set this to keep from interfering with
+the buckets of other users of Amazon S3 or with other buckets of the
+same user.
+
+[def "[option -default-compare] [arg always|never|exists|missing|newer|date|checksum|different]"]
+Defaults to "always." If no -compare is specified on
+[cmd S3::Put], [cmd S3::Get], or [cmd S3::Delete], this comparison is used.
+See those commands for a description of the meaning.
+
+[def "[option -default-separator] [arg string]"]
+Defaults to "/". This is currently unused. It might make sense to use
+this for [cmd S3::Push] and [cmd S3::Pull], but allowing resources to
+have slashes in their names that aren't marking directories would be
+problematic. Hence, this currently does nothing.
+
+[def "[option -default-acl] [arg private|public-read|public-read-write|authenticated-read|keep|calc]"]
+Defaults to an empty string. If no -acl argument is provided to [cmd S3::Put] or
+[cmd S3::Push], this string is used
+(given as the x-amz-acl header if not keep or calc). If this is also
+empty, no x-amz-acl header is generated.
+This is [emph not] used by [cmd S3::REST].
+
+[def "[option -default-bucket] [arg bucketname]"]
+If no bucket is given to [cmd S3::GetBucket], [cmd S3::PutBucket],
+[cmd S3::Get], [cmd S3::Put],
+[cmd S3::Head], [cmd S3::Acl],
+[cmd S3::Delete], [cmd S3::Push],
+[cmd S3::Pull], or [cmd S3::Toss], and if this configuration variable
+is not an empty string (and not simply "/"), then this value
+will be used for the bucket. This is useful if one program does
+a large amount of resource manipulation within a single bucket.
+
+[list_end]
+
+[para]
+
+[call [cmd S3::SuggestBucket] [opt [arg name]]]
+The [cmd S3::SuggestBucket] command accepts an optional string as
+a prefix and returns a valid bucket name containing the [arg name] argument
+and the Access Key ID. This makes the name unique to the owner and
+to the application (assuming the application picks a good [arg name] argument).
+If no name is provided,
+the name from [cmd S3::Configure] [arg -bucket-prefix] is used.
+If that too is empty (which is not the default), an error is thrown.
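+[para]
+A brief sketch (assuming credentials have already been configured,
+and using a hypothetical "myapp" prefix):
+[example {
+# Combines "myapp" with the configured Access Key ID to
+# produce a bucket name unique to this account and application.
+set bucket [S3::SuggestBucket myapp]
+}]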
+
+[call [cmd S3::REST] [arg dict]]
+
+The [cmd S3::REST] command takes as an argument a dictionary and
+returns a dictionary. The return dictionary has the same keys
+as the input dictionary, and includes additional keys as the result.
+The presence or absence of keys in the input dictionary can control
+the behavior of the routine. It never throws an error directly, but
+includes keys "error", "errorInfo", and "errorCode" if necessary.
+Some keys are required, some optional. The routine can run either
+in blocking or non-blocking mode, based on the presence
+of [option resultvar] in the input dictionary. This requires
+the [arg -accesskeyid] and [arg -secretaccesskey] to be configured via
+[cmd S3::Configure] before being called.
+[para]
+The possible input keys are these:
+[list_begin definitions]
+
+[def "[option verb] [arg GET|PUT|DELETE|HEAD]"]
+This required item indicates the verb to be used.
+
+[def "[option resource] [arg string]"]
+This required item indicates the resource to be accessed.
+A leading / is added if not there already. It will
+be URL-encoded for you if necessary. Do not supply a
+resource name that is already URL-encoded.
+
+[def [opt "[option rtype] [arg torrent|acl]"]]
+This indicates a torrent or acl resource is being manipulated.
+Do not include this in the [option resource] key, or the
+"?" separator will get URL-encoded.
+
+[def [opt "[option parameters] [arg dict]"]]
+This optional dictionary provides parameters added to the URL
+for the transaction. The keys must be in the correct case
+(which is confusing in the Amazon documentation) and the
+values must be valid. This can be an empty dictionary or
+omitted entirely if no parameters are desired. No other
+error checking on parameters is performed.
+
+[def [opt "[option headers] [arg dict]"]]
+This optional dictionary provides headers to be added
+to the HTTP request. The keys must be in [emph "lower case"]
+for the authentication to work. The values must not contain
+embedded newlines or carriage returns. This is primarily
+useful for adding x-amz-* headers. Since authentication
+is calculated by [cmd S3::REST], do not add that header here.
+Since content-type gets its own key, also do not add
+that header here.
+
+[def [opt "[option inbody] [arg contentstring]"]]
+This optional item, if provided, gives the content that will
+be sent. It is sent with a transfer encoding of binary, and
+only the low bytes are used, so use [lb]encoding convertto utf-8[rb]
+if the string is a utf-8 string. This is written all in one blast,
+so if you are using non-blocking mode and the [option inbody] is
+especially large, you may wind up blocking on the write socket.
+
+[def [opt "[option infile] [arg filename]"]]
+This optional item, if provided, and if [option inbody] is not provided,
+names the file from which the body of the HTTP message will be
+constructed. The file is opened for reading and sent progressively
+by [lb]fcopy[rb], so it should not block in non-blocking mode
+even if the file is very large. The file is transferred in
+binary mode, so the bytes on your disk will match the bytes
+in your resource. Due to HTTP restrictions, it must be possible to
+use [lb]file size[rb] on this file to determine the size at the
+start of the transaction.
+
+[def [opt "[option S3chan] [arg channel]"]]
+This optional item, if provided, indicates the already-open socket
+over which the transaction should be conducted. If not provided,
+a connection is made to the service access point specified via
+[cmd S3::Configure], which is normally s3.amazonaws.com. If this
+is provided, the channel is not closed at the end of the transaction.
+
+[def [opt "[option outchan] [arg channel]"]]
+This optional item, if provided, indicates the already-open channel
+to which the body returned from S3 should be written. That is,
+to retrieve a large resource, open a file, set the translation mode,
+and pass the channel as the value of the key outchan. Output
+will be written to the channel in pieces so memory does not fill
+up unnecessarily. The channel is not closed at the end of the transaction.
+
+[def [opt "[option resultvar] [arg varname]"]]
+This optional item, if provided, indicates that [cmd S3::REST] should
+run in non-blocking mode. The [arg varname] should be fully qualified
+with respect to namespaces and cannot be local to a proc. If provided,
+the result of the [cmd S3::REST] call is assigned to this variable once
+everything has completed; use trace or vwait to know when this has happened.
+If this key is not provided, the result is simply returned from the
+call to [cmd S3::REST] and no calls to the eventloop are invoked from
+within this call.
+
+[def [opt "[option throwsocket] [arg throw|return]"]]
+This optional item, if provided, controls how [cmd S3::REST] reacts
+to a socket error: if set to [arg throw], the error is thrown; if set
+to [arg return], the error is instead reported in the returned
+dictionary. If [option throwsocket] is set to [arg return] or
+if the call is not blocking, then a socket error (i.e., an error
+whose error code starts with "S3 socket") will be returned in the
+dictionary as [option error], [option errorInfo], and [option errorCode].
+If a foreground call is made (i.e., [option resultvar] is not provided),
+and this option is not provided or is set to [arg throw], then
+[cmd error] will be invoked instead.
+
+[list_end]
+
+[para]
+Once the call to [cmd S3::REST] completes, a new dict is returned,
+either in the [arg resultvar] or as the result of execution. This dict is
+a copy of the original dict with the results added as new keys. The possible
+new keys are these:
+[list_begin definitions]
+
+[def "[option error] [arg errorstring]"]
+[def "[option errorInfo] [arg errorstring]"]
+[def "[option errorCode] [arg errorstring]"]
+If an error is caught, these three keys will be set in the result.
+Note that [cmd S3::REST] does [emph not] consider a non-2XX HTTP
+return code as an error. The [option errorCode] value will be
+formatted according to the [sectref "ERROR REPORTING"] description.
+If these are present, other keys described here might not be.
+
+[def "[option httpstatus] [arg threedigits]"]
+The three-digit code from the HTTP transaction. 2XX for good,
+5XX for server error, etc.
+
+[def "[option httpmessage] [arg text]"]
+The textual result after the status code. "OK" or "Forbidden"
+or etc.
+
+[def "[option outbody] [arg contentstring]"]
+If [arg outchan] was not specified, this key will hold a
+reference to the (unencoded) contents of the body returned.
+If Amazon returned an error (a la the httpstatus not a 2XX value),
+the error message will be in [option outbody] or written to
+[option outchan] as appropriate.
+
+[def "[option outheaders] [arg dict]"]
+This contains a dictionary of headers returned by Amazon.
+The keys are always lower case. It's mainly useful for
+finding the x-amz-meta-* headers, if any, although things
+like last-modified and content-type are also useful.
+The keys of this dictionary are always lower case.
+Both keys and values are trimmed of extraneous whitespace.
+
+[list_end]
+[list_end]
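+
+[para]
+Below is a minimal blocking sketch (the bucket and resource names are
+hypothetical, and credentials must already have been set
+via [cmd S3::Configure]):
+[example {
+set req [dict create verb GET resource /test-bucket/greeting.txt]
+set res [S3::REST $req]
+if {[dict exists $res error]} {
+    error [dict get $res error]
+}
+puts "status: [dict get $res httpstatus]"
+puts [dict get $res outbody]
+}]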
+
+[section "HIGH LEVEL COMMANDS"]
+The routines in this section all make use of one or more calls
+to [cmd S3::REST] to do their work, then parse and manage the data
+in a convenient way. All these commands throw errors
+as described in [sectref "ERROR REPORTING"] unless otherwise noted.
+[para]
+In all these commands, all arguments are presented as name/value pairs,
+in any order. All the argument names start with a hyphen.
+[para]
+There are a few options that are common to many
+of the commands, and those common options are documented here.
+[list_begin definitions]
+[def "[option -blocking] [arg boolean]"]
+If provided and specified as false,
+then any calls to [cmd S3::REST] will be non-blocking,
+and internally these routines will call [lb]vwait[rb] to get
+the results. In other words, these routines will return the
+same value, but they'll have event loops running while waiting
+for Amazon.
+
+[def "[option -parse-xml] [arg xmlstring]"]
+If provided, the routine skips actually communicating with
+Amazon, and instead behaves as if the XML string provided
+was returned as the body of the call. Since several of
+these routines allow the return of data in various formats,
+this argument can be used to parse existing XML to extract
+the bits of information that are needed. It's also helpful
+for testing.
+
+[def "[option -bucket] [arg bucketname]"]
+Almost every high-level command needs to know what bucket
+the resources are in. This option specifies that. (Only the
+command to list available buckets does not require this parameter.)
+This does not need to be URL-encoded, even if it contains
+special or non-ASCII characters. May or may not contain leading
+or trailing spaces - commands normalize the bucket. If this is
+not supplied, the value is taken from [cmd "S3::Configure -default-bucket"]
+if that string isn't empty. Note that spaces and slashes are
+always trimmed from both ends, and what remains must be a valid bucket name.
+
+[def "[option -resource] [arg resourcename]"]
+This specifies the resource of interest within the bucket.
+It may or may not start with a slash - both cases are handled.
+This does not need to be URL-encoded, even if it contains
+special or non-ASCII characters.
+
+[def "[option -compare] [arg always|never|exists|missing|newer|date|checksum|different]"]
+When commands copy resources to files or files to resources, the caller may specify that the copy should be skipped if the contents are the same. This argument specifies the conditions under which the files should be copied. If it is not passed, the result of [cmd "S3::Configure -default-compare"] is used, which in turn defaults to "always." The meanings of the various values are these:
+
+[list_begin definitions]
+[def [arg always]]
+Always copy the data. This is the default.
+
+[def [arg never]]
+Never copy the data. This is essentially a no-op, except in [cmd S3::Push] and [cmd S3::Pull] where the -delete flag might make a difference.
+
+[def [arg exists]]
+Copy the data only if the destination already exists.
+
+[def [arg missing]]
+Copy the data only if the destination does not already exist.
+
+[def [arg newer]]
+Copy the data if the destination is missing, or if the date on the source is
+newer than the date on the destination by at
+least [cmd "S3::Configure -slop-seconds"] seconds. If the source is
+Amazon, the date is taken from the Last-Modified header. If the
+source is local, it is taken as the mtime of the file. If the source data
+is specified in a string rather than a file, it is taken as right now,
+via [lb]clock seconds[rb].
+
+[def [arg date]]
+Like [arg newer], except copy if the date is newer [emph or] older.
+
+[def [arg checksum]]
+Calculate the MD5 checksum on the local file or string, ask Amazon for the eTag
+of the resource, and copy the data if they're different. Copy the data
+also if the destination is missing. Note that this can be slow with
+large local files unless the C version of the MD5 support is available.
+
+[def [arg different]]
+Copy the data if the destination does not exist.
+If the destination exists and an actual file name was specified
+(rather than a content string),
+and the date on the file differs from the date on the resource,
+copy the data.
+If the data is provided as a content string, the "date" is treated
+as "right now", so it will likely always differ unless slop-seconds is large.
+If the dates are the same, the MD5 checksums are compared, and the
+data is copied if the checksums differ.
+[list_end]
+
+[para]
+Note that "newer" and "date" don't care about the contents, and "checksum" doesn't care about the dates, but "different" checks both.
+
+[call [cmd S3::ListAllMyBuckets] \
+[opt "[option -blocking] [arg boolean]"] \
+[opt "[option -parse-xml] [arg xmlstring]"] \
+[opt "[option -result-type] [arg REST|xml|pxml|dict|names|owner]"] \
+]
+This routine performs a GET on the Amazon S3 service, which is
+defined to return a list of buckets owned by the account identified
+by the authorization header. (Blame Amazon for the dumb names.)
+
+[list_begin definitions]
+[def "[option -blocking] [arg boolean]"]
+See above for standard definition.
+
+[def "[option -parse-xml] [arg xmlstring]"]
+See above for standard definition.
+
+[def "[option -result-type] [arg REST]"]
+The dictionary returned by [cmd S3::REST] is the return value of [cmd S3::ListAllMyBuckets]. In this case, a non-2XX httpstatus will not throw an error. You may not combine this with [arg -parse-xml].
+
+[def "[option -result-type] [arg xml]"]
+The raw XML of the body is returned as the result (with no encoding applied).
+
+[def "[option -result-type] [arg pxml]"]
+The XML of the body as parsed by [cmd xsxp::parse] is returned.
+
+[def "[option -result-type] [arg dict]"]
+A dictionary of interesting portions of the XML is returned. The dictionary contains the following keys:
+
+[list_begin definitions]
+[def Owner/ID] The Amazon AWS ID (in hex) of the owner of the bucket.
+[def Owner/DisplayName] The Amazon AWS ID's Display Name.
+[def Bucket/Name] A list of names, one for each bucket.
+[def Bucket/CreationDate] A list of dates, one for each bucket,
+in the same order as Bucket/Name, in ISO format (as returned by Amazon).
+[list_end]
+
+[para]
+
+[def "[option -result-type] [arg names]"]
+A list of bucket names is returned with all other information stripped out.
+This is the default result type for this command.
+
+[def "[option -result-type] [arg owner]"]
+A list containing two elements is returned. The first element is
+the owner's ID, and the second is the owner's display name.
+
+[list_end]
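+
+[para]
+For example, this sketch prints one bucket name per line,
+relying on the default [arg names] result type:
+[example {
+foreach name [S3::ListAllMyBuckets] {
+    puts $name
+}
+}]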
+
+[para]
+
+[call [cmd S3::PutBucket] \
+[opt "[option -bucket] [arg bucketname]"] \
+[opt "[option -blocking] [arg boolean]"] \
+[opt "[option -acl] [arg {{}|private|public-read|public-read-write|authenticated-read}]"] \
+]
+This command creates a bucket if it does not already exist. Bucket names are
+globally unique, so you may get a "Forbidden" error from Amazon even if you
+cannot see the bucket in [cmd S3::ListAllMyBuckets]. See [cmd S3::SuggestBucket] for ways to minimize this risk. The x-amz-acl header comes from the [option -acl] option, or from [cmd "S3::Configure -default-acl"] if not specified.
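+[para]
+A short sketch, using [cmd S3::SuggestBucket] to pick a name
+(the "myapp" prefix is hypothetical):
+[example {
+S3::PutBucket -bucket [S3::SuggestBucket myapp] -acl private
+}]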
+
+[call [cmd S3::DeleteBucket] \
+[opt "[option -bucket] [arg bucketname]"] \
+[opt "[option -blocking] [arg boolean]"] \
+]
+This command deletes a bucket if it is empty and you have such permission.
+Note that Amazon's list of buckets is a global resource, requiring
+far-flung synchronization. If you delete a bucket, it may be quite
+a few minutes (or hours) before you can recreate it, yielding "Conflict"
+errors until then.
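+[para]
+For example (assuming the bucket exists and is already empty):
+[example {
+S3::DeleteBucket -bucket [S3::SuggestBucket myapp]
+}]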
+
+[call [cmd S3::GetBucket] \
+[opt "[option -bucket] [arg bucketname]"] \
+[opt "[option -blocking] [arg boolean]"] \
+[opt "[option -parse-xml] [arg xmlstring]"] \
+[opt "[option -max-count] [arg integer]"] \
+[opt "[option -prefix] [arg prefixstring]" ] \
+[opt "[option -delimiter] [arg delimiterstring]" ] \
+[opt "[option -result-type] [arg REST|xml|pxml|names|dict]"] \
+]
+This lists the contents of a bucket. That is, it returns a directory
+listing of resources within a bucket, rather than transferring any
+user data.
+[list_begin definitions]
+
+[def "[option -bucket] [arg bucketname]"] The standard bucket argument.
+
+[def "[option -blocking] [arg boolean]"] The standard blocking argument.
+
+[def "[option -parse-xml] [arg xmlstring]"] The standard parse-xml argument.
+
+[def "[option -max-count] [arg integer]"]
+If supplied, this is the maximum number of records to be returned.
+If not supplied, the code will iterate until all records have been found.
+Not compatible with -parse-xml. Note that if this is supplied, only
+one call to [cmd S3::REST] will be made. Otherwise, enough calls
+will be made to exhaust the listing, buffering results in memory,
+so take care if you may have huge buckets.
+
+[def "[option -prefix] [arg prefixstring]"]
+If present, restricts listing to resources with a particular prefix. One
+leading / is stripped if present.
+
+[def "[option -delimiter] [arg delimiterstring]"]
+If present, specifies a delimiter for the listing.
+The presence of this will summarize multiple resources
+into one entry, as if S3 supported directories. See the
+Amazon documentation for details.
+
+[def "[option -result-type] [arg REST|xml|pxml|names|dict]"]
+This indicates the format of the return result of the command.
+
+[list_begin definitions]
+[def REST]
+If [arg -max-count] is specified, the dictionary returned
+from [cmd S3::REST] is returned. If [arg -max-count] is
+not specified, a list of all the dictionaries returned from
+the one or more calls to [cmd S3::REST] is returned.
+
+[def xml]
+If [arg -max-count] is specified, the body returned
+from [cmd S3::REST] is returned. If [arg -max-count] is
+not specified, a list of all the bodies returned from
+the one or more calls to [cmd S3::REST] is returned.
+
+[def pxml]
+If [arg -max-count] is specified, the body returned
+from [cmd S3::REST] is passed through [cmd xsxp::parse] and then returned.
+If [arg -max-count] is
+not specified, a list of all the bodies returned from
+the one or more calls to [cmd S3::REST] are each passed through
+[cmd xsxp::parse] and then returned.
+
+[def names]
+Returns a list of all names found in either the Contents/Key fields or
+the CommonPrefixes/Prefix fields. If no [arg -delimiter] is specified
+and no [arg -max-count] is specified, this returns a list of all
+resources with the specified [arg -prefix].
+
+[def dict]
+Returns a dictionary. (Returns only one dictionary even if [arg -max-count]
+wasn't specified.) The keys of the dictionary are as follows:
+
+[list_begin definitions]
+[def Name] The name of the bucket (from the final call to [cmd S3::REST]).
+
+[def Prefix] From the final call to [cmd S3::REST].
+[def Marker] From the final call to [cmd S3::REST].
+[def MaxKeys] From the final call to [cmd S3::REST].
+[def IsTruncated] From the final call to [cmd S3::REST], so
+always false if [arg -max-count] is not specified.
+[def NextMarker] Always provided if IsTruncated is true, and
+calculated if Amazon does not provide it. May be empty if IsTruncated is false.
+
+[def Key] A list of names of resources in the bucket matching the [arg -prefix] and [arg -delimiter] restrictions.
+
+[def LastModified] A list of times of resources in the bucket, in the same
+order as Key, in the format returned by Amazon. (I.e., it is not parsed into
+a seconds-from-epoch.)
+
+[def ETag] A list of entity tags (a.k.a. MD5 checksums) in the same order as Key.
+
+[def Size] A list of sizes in bytes of the resources, in the same order as Key.
+
+[def Owner/ID] A list of owners of the resources in the bucket, in the same order as Key.
+
+[def Owner/DisplayName] A list of owners of the resources in the bucket, in the same order as Key. These are the display names.
+
+[def CommonPrefixes/Prefix] A list of prefixes common to multiple entities. This is present only if [arg -delimiter] was supplied.
+
+[list_end]
+
+[list_end]
+
+[list_end]
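+
+[para]
+A sketch listing every resource under a hypothetical prefix:
+[example {
+set keys [S3::GetBucket -bucket test-bucket \
+    -prefix logs/ -result-type names]
+puts "[llength $keys] resources found"
+}]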
+
+[call [cmd S3::Put] \
+[opt "[option -bucket] [arg bucketname]"] \
+[option -resource] [arg resourcename] \
+[opt "[option -blocking] [arg boolean]"] \
+[opt "[option -file] [arg filename]"] \
+[opt "[option -content] [arg contentstring]"] \
+[opt "[option -acl] [arg private|public-read|public-read-write|authenticated-read|calc|keep]"] \
+[opt "[option -content-type] [arg contenttypestring]"] \
+[opt "[option -x-amz-meta-*] [arg metadatatext]"] \
+[opt "[option -compare] [arg comparemode]"] \
+]
+
+This command sends data to a resource on Amazon's servers for storage,
+using the HTTP PUT command. It returns 0 if the [option -compare] mode
+prevented the transfer, 1 if the transfer worked, or throws an error
+if the transfer was attempted but failed.
+Server 5XX errors and S3 socket errors are retried
+according to [cmd "S3:Configure -retries"] settings before throwing an error;
+other errors throw immediately.
+
+[list_begin definitions]
+[def [option -bucket]]
+This specifies the bucket into which the resource will be written.
+Leading and/or trailing slashes are removed for you, as are spaces.
+
+[def [option -resource]]
+This is the full name of the resource within the bucket. A single
+leading slash is removed, but not a trailing slash.
+Spaces are not trimmed.
+
+[def [option -blocking]]
+The standard blocking flag.
+
+[def [option -file]]
+If this is specified, the [arg filename] must exist, must be readable,
+and must not be a special or directory file. [lb]file size[rb] must
+apply to it and must not change for the lifetime of the call. The
+default content-type is calculated based on the name and/or contents
+of the file. Specifying this is an error if [option -content] is
+also specified, but at least one of [option -file] or [option -content] must
+be specified. (The file is allowed to not exist or not be readable if
+[option -compare] [arg never] is specified.)
+
+[def [option -content]]
+If this is specified, the [arg contentstring] is sent as the body
+of the resource. The content-type defaults to "application/octet-string".
+Only the low bytes are sent, so non-ASCII should use the appropriate encoding
+(such as [lb]encoding convertto utf-8[rb]) before passing it
+to this routine, if necessary. Specifying this is an error if [option -file]
+is also specified, but at least one of [option -file] or [option -content] must
+be specified.
+
+[def [option -acl]]
+This defaults to [cmd "S3::Configure -default-acl"] if not specified.
+It sets the x-amz-acl header on the PUT operation.
+If the value provided is [arg calc], the x-amz-acl header is
+calculated based on the I/O permissions of the file to be uploaded;
+it is an error to specify [arg calc] and [option -content].
+If the value provided is [arg keep], the acl of the resource
+is read before the PUT (or the default is used if the
+resource does not exist), then set back to what it
+was after the PUT (if it existed). An error will occur if
+the resource is successfully written but the kept ACL cannot
+be then applied. This should never happen.
+[emph Note:] [arg calc] is not currently fully implemented.
+
+[def [option -x-amz-meta-*]]
+If any header starts with "-x-amz-meta-", its contents are added to the
+PUT command to be stored as metadata with the resource. Again, no
+encoding is performed, and the metadata should not contain characters
+like newlines, carriage returns, and so on. It is best to stick with
+simple ASCII strings, or to fix the library in several places.
+
+[def [option -content-type]]
+This overrides the content-type calculated by [option -file] or
+sets the content-type for [option -content].
+
+[def [option -compare]]
+This is the standard compare mode argument. [cmd S3::Put] returns
+1 if the data was copied or 0 if the data was skipped due to
+the comparison mode so indicating it should be skipped.
+
+[list_end]
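+
+[para]
+A sketch of a simple upload (bucket and resource names are
+hypothetical):
+[example {
+S3::Put -bucket test-bucket -resource greeting.txt \
+    -content "Hello, world!" -content-type text/plain \
+    -compare checksum
+}]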
+
+[para]
+
+[call [cmd S3::Get] \
+[opt "[option -bucket] [arg bucketname]"] \
+[option -resource] [arg resourcename] \
+[opt "[option -blocking] [arg boolean]"] \
+[opt "[option -compare] [arg comparemode]"] \
+[opt "[option -file] [arg filename]"] \
+[opt "[option -content] [arg contentvarname]"] \
+[opt "[option -timestamp] [arg aws|now]"] \
+[opt "[option -headers] [arg headervarname]"] \
+]
+This command retrieves data from a resource on Amazon's S3 servers,
+using the HTTP GET command. It returns 0 if the [option -compare] mode
+prevented the transfer, 1 if the transfer worked, or throws an error
+if the transfer was attempted but failed. Server 5XX errors and S3 socket
+errors are retried
+according to [cmd "S3::Configure -retries"] settings before throwing an error;
+other errors throw immediately. Note that this is always authenticated
+as the user configured via [cmd "S3::Configure -accesskeyid"]. Use
+the standard http package for unauthenticated GETs.
+[list_begin definitions]
+
+[def [option -bucket]]
+This specifies the bucket from which the resource will be read.
+Leading and/or trailing slashes are removed for you, as are spaces.
+
+[def [option -resource]]
+This is the full name of the resource within the bucket. A single
+leading slash is removed, but not a trailing slash.
+Spaces are not trimmed.
+
+[def [option -blocking]]
+The standard blocking flag.
+
+[def [option -file]]
+If this is specified, the body of the resource will be read into this file,
+incrementally without pulling it entirely into memory first. The parent
+directory must already exist. If the file already exists, it must be
+writable. If an error is thrown part-way through the process and the
+file already existed, it may be clobbered. If an error is thrown part-way
+through the process and the file did not already exist, any partial
+bits will be deleted. Specifying this is an error if [option -content]
+is also specified, but at least one of [option -file] or [option -content] must
+be specified.
+
+[def [option -timestamp]]
+This is only valid in conjunction with [option -file]. It may be specified
+as [arg now] or [arg aws]. The default is [arg now]. If [arg now], the file's
+modification date is left up to the system. If [arg aws], the file's
+mtime is set to match the Last-Modified header on the resource, synchronizing
+the two appropriately for [option -compare] [arg date] or
+[option -compare] [arg newer].
+
+[def [option -content]]
+If this is specified, the [arg contentvarname] is a variable in the caller's
+scope (not necessarily global) that receives the value of the body of
+the resource. No encoding is done, so if the resource (for example) represents
+a UTF-8 byte sequence, use [lb]encoding convertfrom utf-8[rb] to get a valid
+UTF-8 string. If this is specified, the [option -compare] is ignored unless
+it is [arg never], in which case no assignment to [arg contentvarname] is
+performed. Specifying this is an error if [option -file] is also specified,
+but at least one of [option -file] or [option -content] must be specified.
+
+[def [option -compare]]
+This is the standard compare mode argument. [cmd S3::Get] returns
+1 if the data was copied or 0 if the data was skipped due to
+the comparison mode so indicating it should be skipped.
+
+[def [option -headers]]
+If this is specified, the headers resulting from the fetch are stored
+in the provided variable, as a dictionary. This will include content-type
+and x-amz-meta-* headers, as well as the usual HTTP headers, the x-amz-id
+debugging headers, and so on. If no file is fetched (due to [option -compare]
+or other errors), no assignment to this variable is performed.
+
+[list_end]
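+
+[para]
+A sketch of a download that skips the transfer when the local copy
+is already newer (names are hypothetical):
+[example {
+S3::Get -bucket test-bucket -resource greeting.txt \
+    -file /tmp/greeting.txt -timestamp aws -compare newer
+}]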
+
+[para]
+
+[call [cmd S3::Head] \
+[opt "[option -bucket] [arg bucketname]"] \
+[option -resource] [arg resourcename] \
+[opt "[option -blocking] [arg boolean]"] \
+[opt "[option -dict] [arg dictvarname]"] \
+[opt "[option -headers] [arg headersvarname]"] \
+[opt "[option -status] [arg statusvarname]"] \
+]
+This command requests HEAD from the resource.
+It returns whether a 2XX code was returned as a result
+of the request, never throwing an S3 remote error.
+That is, if this returns 1, the resource exists and is
+accessible. If this returns 0, something went wrong, and the
+[option -status] result can be consulted for details.
+
+[list_begin definitions]
+
+[def [option -bucket]]
+This specifies the bucket from which the resource will be read.
+Leading and/or trailing slashes are removed for you, as are spaces.
+
+[def [option -resource]]
+This is the full name of the resource within the bucket. A single
+leading slash is removed, but not a trailing slash.
+Spaces are not trimmed.
+
+[def [option -blocking]]
+The standard blocking flag.
+
+[def [option -dict]]
+If specified, the resulting dictionary from the [cmd S3::REST]
+call is assigned to the indicated (not necessarily global) variable
+in the caller's scope.
+
+[def [option -headers]]
+If specified, the dictionary of headers from the result are assigned
+to the indicated (not necessarily global) variable in the caller's scope.
+
+[def [option -status]]
+If specified, the indicated (not necessarily global) variable in
+the caller's scope is assigned a 2-element list. The first element is
+the 3-digit HTTP status code, while the second element is
+the HTTP message (such as "OK" or "Forbidden").
+
+[list_end]
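+
+[para]
+A sketch testing whether a resource exists and is accessible
+(names are hypothetical):
+[example {
+if {[S3::Head -bucket test-bucket -resource greeting.txt \
+        -status why]} {
+    puts "resource exists"
+} else {
+    puts "inaccessible: HTTP [lindex $why 0] [lindex $why 1]"
+}
+}]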
+
+[call [cmd S3::GetAcl] \
+[opt "[option -blocking] [arg boolean]"] \
+[opt "[option -bucket] [arg bucketname]"] \
+[option -resource] [arg resourcename] \
+[opt "[option -result-type] [arg REST|xml|pxml]"] \
+]
+
+This command gets the ACL of the indicated resource or throws an
+error if it is unavailable.
+
+[list_begin definitions]
+[def "[option -blocking] [arg boolean]"]
+See above for standard definition.
+
+[def [option -bucket]]
+This specifies the bucket from which the resource will be read.
+Leading and/or trailing slashes are removed for you, as are spaces.
+
+[def [option -resource]]
+This is the full name of the resource within the bucket. A single
+leading slash is removed, but not a trailing slash.
+Spaces are not trimmed.
+
+[def "[option -parse-xml] [arg xml]"]
+The XML from a previous GetACL can be passed in to be parsed into
+dictionary form. In this case, -result-type must be pxml or dict.
+
+[def "[option -result-type] [arg REST]"]
+The dictionary returned by [cmd S3::REST] is the return value of
+[cmd S3::GetAcl]. In this case, a non-2XX httpstatus will not throw an
+error.
+
+[def "[option -result-type] [arg xml]"]
+The raw XML of the body is returned as the result (with no encoding applied).
+
+[def "[option -result-type] [arg pxml]"]
+The XML of the body as parsed by [cmd xsxp::parse] is returned.
+
+[def "[option -result-type] [arg dict]"]
+This fetches the ACL, parses it, and returns a dictionary of two elements.
+
+[para]
+
+The first element has the key "owner" whose value is the canonical ID of the owner of the resource.
+
+[para]
+
+The second element has the key "acl" whose value is a dictionary. Each
+key in the dictionary is one of Amazon's permissions, namely "READ",
+"WRITE", "READ_ACP", "WRITE_ACP", or "FULL_CONTROL". Each value of each
+key is a list of canonical IDs or group URLs that have that permission.
+Elements are not in the list in any particular order, and not all keys
+are necessarily present. Display names are not returned, as they are
+not especially useful; use pxml to obtain them if necessary.
+
+[list_end]
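+
+[para]
+A sketch fetching the parsed ACL (names are hypothetical):
+[example {
+set info [S3::GetAcl -bucket test-bucket \
+    -resource greeting.txt -result-type dict]
+puts "owner: [dict get $info owner]"
+# Permission keys such as READ appear only when granted.
+if {[dict exists $info acl READ]} {
+    puts "readers: [dict get $info acl READ]"
+}
+}]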
+
+[call [cmd S3::PutAcl] \
+[opt "[option -blocking] [arg boolean]"] \
+[opt "[option -bucket] [arg bucketname]"] \
+[option -resource] [arg resourcename] \
+[opt "[option -acl] [arg new-acl]"] \
+]
+
+This sets the ACL on the indicated resource. It returns the XML written to the ACL, or throws an error if anything went wrong.
+
+[list_begin definitions]
+[def "[option -blocking] [arg boolean]"]
+See above for standard definition.
+
+[def [option -bucket]]
+This specifies the bucket from which the resource will be read.
+Leading and/or trailing slashes are removed for you, as are spaces.
+
+[def [option -resource]]
+This is the full name of the resource within the bucket. A single
+leading slash is removed, but not a trailing slash.
+Spaces are not trimmed.
+
+[def [option -owner]]
+If this is provided, it is assumed to match the owner of the resource.
+Otherwise, a GET may need to be issued against the resource to find
+the owner. If you already have the owner (such as from a call
+to [cmd S3::GetAcl]), you can pass the value of the "owner" key
+as the value of this option, and it will be used in the construction
+of the XML.
+
+[def [option -acl]]
+If this option is specified, it provides the ACL the caller wishes
+to write to the resource. If this is not supplied or is empty,
+the value is taken from [cmd "S3::Configure -default-acl"].
+The ACL is written with a PUT to the ?acl resource.
+
+[para]
+
+If the value passed to this option
+starts with "<", it is taken to be a body to be PUT to the ACL resource.
+
+[para]
+
+If the value matches one of the standard Amazon x-amz-acl headers (i.e.,
+a canned access policy), that header is translated to XML and then
+applied. The canned access policies are private, public-read,
+public-read-write, and authenticated-read (in lower case).
+
+[para]
+
+Otherwise, the value is assumed to be a dictionary formatted as the
+"acl" sub-entry within the dict returns by [cmd "S3::GetAcl -result-type dict"].
+The proper XML is generated and applied to the resource. Note that a
+value containing "//" is assumed to be a group, a value containing "@"
+is assumed to be an AmazonCustomerByEmail, and otherwise the value is
+assumed to be a canonical Amazon ID.
+
+[para]
+
+Note that you cannot change the owner, so calling GetAcl on a resource
+owned by one user and applying it via PutAcl on a resource owned by
+another user may not do exactly what you expect.
+
+[list_end]
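+
+[para]
+A sketch applying one of the canned access policies
+(names are hypothetical):
+[example {
+S3::PutAcl -bucket test-bucket -resource greeting.txt \
+    -acl public-read
+}]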
+
+[call [cmd S3::Delete] \
+[opt "[option -bucket] [arg bucketname]"] \
+[option -resource] [arg resourcename] \
+[opt "[option -blocking] [arg boolean]"] \
+[opt "[option -status] [arg statusvar]"] \
+]
+This command deletes the specified resource from the specified bucket.
+It returns 1 if the resource was deleted successfully, 0 otherwise.
+It returns 0 rather than throwing an S3 remote error.
+
+[list_begin definitions]
+[def [option -bucket]]
+This specifies the bucket from which the resource will be deleted.
+Leading and/or trailing slashes are removed for you, as are spaces.
+
+[def [option -resource]]
+This is the full name of the resource within the bucket. A single
+leading slash is removed, but not a trailing slash.
+Spaces are not trimmed.
+
+[def [option -blocking]]
+The standard blocking flag.
+
+[def [option -status]]
+If specified, the indicated (not necessarily global) variable
+in the caller's scope is set to a two-element list. The first
+element is the 3-digit HTTP status code. The second element
+is the HTTP message (such as "OK" or "Forbidden"). Note that
+Amazon's DELETE result is 204 on success, that being the
+code indicating no content in the returned body.
+
+[list_end]
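+
+[para]
+For example (names are hypothetical):
+[example {
+if {![S3::Delete -bucket test-bucket -resource greeting.txt \
+        -status why]} {
+    puts "delete failed: HTTP [lindex $why 0]"
+}
+}]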
+
+[para]
+
+[call [cmd S3::Push] \
+[opt "[option -bucket] [arg bucketname]"] \
+[option -directory] [arg directoryname] \
+[opt "[option -prefix] [arg prefixstring]"] \
+[opt "[option -compare] [arg comparemode]"] \
+[opt "[option -x-amz-meta-*] [arg metastring]"] \
+[opt "[option -acl] [arg aclcode]"] \
+[opt "[option -delete] [arg boolean]"] \
+[opt "[option -error] [arg throw|break|continue]"] \
+[opt "[option -progress] [arg scriptprefix]"] \
+]
+This synchronizes a local directory with a remote bucket
+by pushing the differences using [cmd S3::Put]. Note that
+if something has changed in the bucket but not locally,
+those changes could be lost. Thus, this is not a general
+two-way synchronization primitive. (See [cmd S3::Sync]
+for that.) Note too that resource names are case sensitive,
+so changing the case of a file on a Windows machine may lead
+to otherwise-unnecessary transfers.
+Note that only regular files are considered, so devices, pipes, symlinks,
+and directories are not copied.
+
+[list_begin definitions]
+
+[def [option -bucket]]
+This names the bucket into which data will be pushed.
+
+[def [option -directory]]
+This names the local directory from which files will be taken.
+It must exist and be readable via [lb]glob[rb] and so on. If only
+some of the files therein are readable, [cmd S3::Push] will PUT
+those files that are readable and return in its results the list
+of files that could not be opened.
+
+[def [option -prefix]]
+This names the prefix that will be added to all resources.
+That is, it is the remote equivalent of [option -directory].
+If it is not specified, the root of the bucket will be treated
+as the remote directory. An example may clarify.
+[example {
+S3::Push -bucket test -directory /tmp/xyz -prefix hello/world
+}]
+In this example, /tmp/xyz/pdq.html will be stored as
+http://s3.amazonaws.com/test/hello/world/pdq.html in Amazon's servers. Also,
+/tmp/xyz/abc/def/Hello will be stored as
+http://s3.amazonaws.com/test/hello/world/abc/def/Hello in Amazon's servers.
+Without the [option -prefix] option, /tmp/xyz/pdq.html would be stored
+as http://s3.amazonaws.com/test/pdq.html.
+
+[def [option -blocking]]
+This is the standard blocking option.
+
+[def [option -compare]]
+If present, this is passed to each invocation of [cmd S3::Put].
+Naturally, [cmd "S3::Configure -default-compare"] is used
+if this is not specified.
+
+[def [option -x-amz-meta-*]]
+If present, this is passed to each invocation of [cmd S3::Put]. All copied
+files will have the same metadata.
+
+[def [option -acl]]
+If present, this is passed to each invocation of [cmd S3::Put].
+
+[def [option -delete]]
+This defaults to false. If true, resources in the destination that
+are not in the source directory are deleted with [cmd S3::Delete].
+Since only regular files are considered, the existence of a symlink,
+pipe, device, or directory in the local source will [emph not]
+prevent the deletion of a remote resource with a corresponding name.
+
+[def [option -error]]
+This controls the behavior of [cmd S3::Push] in the event that
+[cmd S3::Put] throws an error. Note that
+errors encountered on the local file system or in reading the
+list of resources in the remote bucket always throw errors.
+This option allows control over "partial" errors, when some
+files were copied and some were not. Once started, the
+[cmd S3::Delete] pass always runs to completion, with errors simply
+recorded in the return result.
+
+[list_begin definitions]
+
+[def throw]
+The error is rethrown with the same errorCode.
+
+[def break]
+Processing stops without throwing an error, the error is recorded
+in the return value, and the command returns normally.
+The calls to [cmd S3::Delete] are not started.
+
+[def continue]
+This is the default. Processing continues without throwing: the
+error is recorded in the return result, and processing resumes with
+the next file in the local directory to be copied.
+
+[list_end]
+
+[def [option -progress]]
+If this is specified and the indicated script prefix is not empty, the
+indicated script prefix will be invoked several times in the caller's
+context with additional arguments at various points in the processing.
+This allows progress reporting without backgrounding. The provided
+prefix will be invoked with additional arguments, with the first
+additional argument indicating what part of the process is being
+reported on. The prefix is initially invoked with [arg args] as the
+first additional argument and a dictionary representing the normalized
+arguments to the [cmd S3::Push] call as the second additional argument.
+Then the prefix is invoked with [arg local] as the first additional
+argument and a list of suffixes of the files to be considered as the
+second argument. Then the prefix is invoked with [arg remote] as the
+first additional argument and a list of suffixes existing in the remote
+bucket as the second additional argument. Then, for each file in the
+local list, the prefix will be invoked with [arg start] as the first
+additional argument and the common suffix as the second additional
+argument. When [cmd S3::Put] returns for that file, the prefix will be
+invoked with [arg copy] as the first additional argument, the common
+suffix as the second additional argument, and a third argument that will
+be "copied" (if [cmd S3::Put] sent the resource), "skipped" (if
+[cmd S3::Put] decided not to based on [option -compare]), or the errorCode
+that [cmd S3::Put] threw due to unexpected errors (in which case the
+third argument is a list that starts with "S3"). When all files have
+been transferred, the prefix may be invoked zero or more times with
+[arg delete] as the first additional argument and the suffix of the
+resource being deleted as the second additional argument, with a third
+argument being either an empty string (if the delete worked) or the
+errorCode from [cmd S3::Delete] if it failed. Finally, the prefix
+will be invoked with [arg finished] as the first additional argument
+and the return value as the second additional argument.
+A sketch of such a prefix follows this list.
+
+[list_end]
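+
+[para]
+
+The following sketch assumes only the invocation pattern described
+above; the proc name is illustrative:
+
+[example {
+proc myprogress {event args} {
+    switch -- $event {
+        args     { # args: dict of the normalized S3::Push arguments }
+        local    { puts "[llength [lindex $args 0]] local files" }
+        remote   { puts "[llength [lindex $args 0]] remote resources" }
+        start    { puts "starting [lindex $args 0]" }
+        copy     { puts "[lindex $args 0]: [lindex $args 1]" }
+        delete   { puts "delete [lindex $args 0]: [lindex $args 1]" }
+        finished { puts "done" }
+    }
+}
+S3::Push -bucket sample -directory /tmp/xyz -progress myprogress
+}]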
+
+The return result from this command is a dictionary. The keys are the
+suffixes (i.e., the common portion of the path after the [option -directory]
+and [option -prefix]), while the values are either "copied", "skipped" (if
+[option -compare] indicated not to copy the file), or the errorCode
+thrown by [cmd S3::Put], as appropriate. If [option -delete] was true,
+there may also be entries for suffixes with the value "deleted" or
+"notdeleted", indicating whether the attempted [cmd S3::Delete]
+worked or not, respectively. There is one additional pair in the return
+result, whose key is the empty string and whose value is a nested dictionary.
+The keys of this nested dictionary include "filescopied" (the number of
+files successfully copied), "bytescopied" (the number of data bytes in
+the files copied, excluding headers, metadata, etc), "compareskipped" (the
+number of files not copied due to [option -compare] mode), "errorskipped"
+(the number of files not copied due to thrown errors), "filesdeleted"
+(the number of resources deleted due to not having corresponding files
+locally, or 0 if [option -delete] is false), and "filesnotdeleted"
+(the number of resources whose deletion was attempted but failed).
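+
+[para]
+
+For instance, a minimal sketch reading that summary entry (the bucket
+and directory are illustrative):
+
+[example {
+set res [S3::Push -bucket sample -directory /tmp/xyz]
+set stats [dict get $res ""]
+puts "[dict get $stats filescopied] copied,\
+    [dict get $stats bytescopied] bytes"
+}]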
+[para]
+Note that this is currently implemented somewhat inefficiently.
+It fetches the bucket listing (including timestamps and eTags),
+then calls [cmd S3::Put], which uses HEAD to find the timestamps
+and eTags again. Correcting this with no API change
+is planned for a future upgrade.
+
+[para]
+
+[call [cmd S3::Pull] \
+[opt "[option -bucket] [arg bucketname]"] \
+[option -directory] [arg directoryname] \
+[opt "[option -prefix] [arg prefixstring]"] \
+[opt "[option -blocking] [arg boolean]"] \
+[opt "[option -compare] [arg comparemode]"] \
+[opt "[option -delete] [arg boolean]"] \
+[opt "[option -timestamp] [arg aws|now]"] \
+[opt "[option -error] [arg throw|break|continue]"] \
+[opt "[option -progress] [arg scriptprefix]"] \
+]
+
+This synchronises a remote bucket with a local directory by pulling the
+differences using [cmd S3::Get]. If something has been changed locally but
+not in the bucket, those differences may be lost. This is not a general
+two-way synchronization mechanism. (See [cmd S3::Sync] for that.)
+This creates directories
+if needed; new directories are created with default permissions. Note that
+resource names are case sensitive, so changing the case of a file on a
+Windows machine may lead to otherwise-unnecessary transfers. Also, try not
+to store data in resources that end with a slash, or which are prefixes of
+resources that otherwise would start with a slash; i.e., don't use this if
+you store data in resources whose names have to be directories locally.
+[para]
+Note that this is currently implemented somewhat inefficiently.
+It fetches the bucket listing (including timestamps and eTags),
+then calls [cmd S3::Get], which uses HEAD to find the timestamps
+and eTags again. Correcting this with no API change
+is planned for a future upgrade.
+
+[list_begin definitions]
+
+[def [option -bucket]]
+This names the bucket from which data will be pulled.
+
+[def [option -directory]]
+This names the local directory into which files will be written.
+It must exist, be readable via [lb]glob[rb], writable for file creation,
+and so on. If only some of the files therein are writable,
+[cmd S3::Pull] will GET
+those files that are writable and return in its results the list
+of files that could not be opened.
+
+[def [option -prefix]]
+The prefix of resources that will be considered for retrieval.
+See [cmd S3::Push] for more details, examples, etc. (Of course,
+[cmd S3::Pull] reads rather than writes, but the prefix is
+treated similarly.)
+
+[def [option -blocking]]
+This is the standard blocking option.
+
+[def [option -compare]]
+This is passed to each invocation of [cmd S3::Get] if provided.
+Naturally, [cmd "S3::Configure -default-compare"] is
+used if this is not provided.
+
+[def [option -timestamp]]
+This is passed to each invocation of [cmd S3::Get] if provided.
+
+[def [option -delete]]
+If this is specified and true, files that exist in the [option -directory]
+that are not in the [option -prefix] will be deleted after all resources
+have been copied. In addition, empty directories (other than the
+top-level [option -directory]) will be deleted, as
+Amazon S3 has no concept of an empty directory.
+
+[def [option -error]]
+See [cmd S3::Push] for a description of this option.
+
+[def [option -progress]]
+See [cmd S3::Push] for a description of this option.
+It differs slightly in that local directories may be included
+with a trailing slash to indicate they are directories.
+
+[list_end]
+
+The return value from this command is a dictionary. It is identical
+in form and meaning to the return result of [cmd S3::Push]. It
+differs only in that
+directories may be included, with a trailing slash in their name,
+if they are empty and get deleted.
+
+[call [cmd S3::Toss] \
+[opt "[option -bucket] [arg bucketname]"] \
+[option -prefix] [arg prefixstring] \
+[opt "[option -blocking] [arg boolean]"] \
+[opt "[option -error] [arg throw|break|continue]"] \
+[opt "[option -progress] [arg scriptprefix]"] \
+]
+
+This deletes some or all resources within a bucket. It would be
+considered a "recursive delete" had Amazon implemented actual
+directories.
+
+[list_begin options]
+[opt_def [option -bucket]]
+The bucket from which resources will be deleted.
+
+[opt_def [option -blocking]]
+The standard blocking option.
+
+[opt_def [option -prefix]]
+The prefix for resources to be deleted. Any resource that
+starts with this string will be deleted. This is required.
+To delete everything in the bucket, pass an empty string
+for the prefix.
+
+[opt_def [option -error]]
+If this is "throw", [cmd S3::Toss] rethrows any errors
+it encounters. If this is "break", [cmd S3::Toss] returns
+with a normal return after the first error, recording that
+error in the return result. If this is "continue", which is
+the default, [cmd S3::Toss] continues on and lists all
+errors in the return result.
+
+[opt_def [option -progress]]
+If this is specified and not an empty string, the script
+prefix will be invoked several times in the context of the caller
+with additional arguments appended. Initially, it will be invoked
+with the first additional argument being [arg args] and the second
+being the processed list of arguments to [cmd S3::Toss]. Then it
+is invoked with [arg remote] as the first additional argument and
+the list of suffixes in the bucket to be deleted as the second
+additional argument. Then it is invoked with the first additional
+argument being [arg delete] and the second additional argument being
+the suffix deleted and the third additional argument being "deleted"
+or "notdeleted" depending on whether [cmd S3::Delete] threw an error.
+Finally, the script prefix is invoked with a first additional argument
+of "finished" and a second additional argument of the return value.
+
+[list_end]
+
+The return value is a dictionary. The keys are the suffixes of files
+that [cmd S3::Toss] attempted to delete, and the values are either
+the string "deleted" or "notdeleted". There is also one additional
+pair, whose key is the empty string and whose value is an embedded
+dictionary. The keys of this embedded dictionary include
+"filesdeleted" and "filesnotdeleted", each of which has integer values.
+
+[list_end]
+
+[section LIMITATIONS]
+
+[list_begin itemized]
+
+[item] The pure-Tcl MD5 checking is slow. If you are processing
+files in the megabyte range, consider ensuring binary support is available.
+
+[item] The commands [cmd S3::Pull] and [cmd S3::Push] fetch a
+directory listing which includes timestamps and MD5 hashes,
+then invoke [cmd S3::Get] and [cmd S3::Put]. If
+a complex [option -compare] mode is specified, [cmd S3::Get] and
+[cmd S3::Put] will invoke a HEAD operation for each file to fetch
+timestamps and MD5 hashes of each resource again. It is expected that
+a future release of this package will solve this without any API changes.
+
+[item] The commands [cmd S3::Pull] and [cmd S3::Push] fetch a
+directory listing without using [option -max-count]. The entire
+directory is pulled into memory at once. For very large buckets,
+this could be a performance problem. The author, at this time,
+does not plan to change this behavior. Welcome to Open Source.
+
+[item] [cmd S3::Sync] is neither designed nor implemented yet.
+The intention would be to keep changes synchronised, so changes
+could be made to both the bucket and the local directory and
+be merged by [cmd S3::Sync].
+
+[item] Nor is
+[option -compare] [arg calc] fully implemented. This is primarily due to
+Windows not providing a convenient method for distinguishing between
+local files that are "public-read" or "public-read-write". Assistance
+figuring out TWAPI for this would be appreciated. The U**X semantics
+are difficult to map directly as well. See the source for details.
+Note that there are no tests for calc, since it isn't done yet.
+
+[item] The HTTP processing is implemented within the library,
+rather than using a "real" HTTP package. Hence, multi-line headers
+are not (yet) handled correctly. Do not include carriage returns or
+linefeeds in x-amz-meta-* headers, content-type values, and so on.
+The author does not at this time expect to improve this.
+
+[item] Internally, [cmd S3::Push] and [cmd S3::Pull] and [cmd S3::Toss]
+are all very similar and should be refactored.
+
+[item] The idea of using [option -compare] [arg never]
+[option -delete] [arg true] to delete files that have been
+deleted from one place but not the other yet not copying
+changed files is untested.
+
+[list_end]
+
+[section "USAGE SUGGESTIONS"]
+
+To fetch a "directory" out of a bucket, make changes, and store it back:
+[example_begin]
+file mkdir ./tempfiles
+S3::Pull -bucket sample -prefix of/interest -directory ./tempfiles \
+ -timestamp aws
+do_my_process ./tempfiles other arguments
+S3::Push -bucket sample -prefix of/interest -directory ./tempfiles \
+ -compare newer -delete true
+[example_end]
+
+[para]
+To delete files locally that were deleted off of S3 but not otherwise
+update files:
+
+[example_begin]
+S3::Pull -bucket sample -prefix of/interest -directory ./myfiles \
+ -compare never -delete true
+[example_end]
+
+[section "FUTURE DEVELOPMENTS"]
+
+The author intends to work on several additional projects related to
+this package, in addition to finishing the unfinished features.
+
+[para]
+First, a command-line program allowing browsing of buckets and
+transfer of files from shell scripts and command prompts is useful.
+
+[para]
+Second, a GUI-based program allowing visual manipulation of
+bucket and resource trees not unlike Windows Explorer would
+be useful.
+
+[para]
+Third, a command-line (and perhaps a GUI-based) program called
+"OddJob" that will use S3 to synchronize computation amongst
+multiple servers running OddJob. An S3 bucket will be set up
+with a number of scripts to run, and the OddJob program can
+be invoked on multiple machines to run scripts on all the machines,
+each machine moving on to the next unstarted task as it finishes
+its current one.
+This is still being designed, and it is intended primarily
+to be run on Amazon's Elastic Compute Cloud.
+
+[include ../common-text/tls-security-notes.inc]
+
+[vset CATEGORY amazon-s3]
+[include ../doctools2base/include/feedback.inc]
+[manpage_end]
diff --git a/tcllib/modules/amazon-s3/S3.tcl b/tcllib/modules/amazon-s3/S3.tcl
new file mode 100644
index 0000000..b82a256
--- /dev/null
+++ b/tcllib/modules/amazon-s3/S3.tcl
@@ -0,0 +1,1960 @@
+# S3.tcl
+#
+###Abstract
+# This presents an interface to Amazon's S3 service.
+# The Amazon S3 service allows for reliable storage
+# and retrieval of data via HTTP.
+#
+# Copyright (c) 2006,2008 Darren New. All Rights Reserved.
+#
+###Copyright
+# NO WARRANTIES OF ANY TYPE ARE PROVIDED.
+# COPYING OR USE INDEMNIFIES THE AUTHOR IN ALL WAYS.
+#
+# This software is licensed under essentially the same
+# terms as Tcl. See LICENSE.txt for the terms.
+#
+###Revision String
+# SCCS: %Z% %M% %I% %E% %U%
+#
+###Change history:
+# 0.7.2 - added -default-bucket.
+# 0.8.0 - fixed bug in getLocal using wrong prefix.
+# Upgraded to Tcl 8.5 release version.
+# 1.0.0 - added SetAcl, GetAcl, and -acl keep option.
+#
+
+package require Tcl 8.5
+
+# This is by Darren New too.
+# It is a SAX package to format XML for easy retrieval.
+# It should be in the same distribution as S3.
+package require xsxp
+
+# These three are required to do the auth, so always require them.
+# Note that package registry and package fileutil are required
+# by the individual routines that need them. Grep for "package".
+package require sha1
+package require md5
+package require base64
+
+package provide S3 1.0.3
+
+namespace eval S3 {
+ variable config ; # A dict holding the current configuration.
+ variable config_orig ; # Holds configuration to "reset" back to.
+ variable debug 0 ; # Turns on or off S3::debug
+ variable debuglog 0 ; # Turns on or off debugging into a file
+ variable bgvar_counter 0 ; # Makes unique names for bgvars.
+
+ set config_orig [dict create \
+ -reset false \
+ -retries 3 \
+ -accesskeyid "" -secretaccesskey "" \
+ -service-access-point "s3.amazonaws.com" \
+ -slop-seconds 3 \
+ -use-tls false \
+ -bucket-prefix "TclS3" \
+ -default-compare "always" \
+ -default-separator "/" \
+ -default-acl "" \
+ -default-bucket "" \
+ ]
+
+ set config $config_orig
+}
+
+# Internal, for development. Print a line, and maybe log it.
+proc S3::debuglogline {line} {
+ variable debuglog
+ puts $line
+ if {$debuglog} {
+ set x [open debuglog.txt a]
+ puts $x $line
+ close $x
+ }
+}
+
+# Internal, for development. Print debug info properly formatted.
+proc S3::debug {args} {
+ variable debug
+ variable debuglog
+ if {!$debug} return
+ set res ""
+ if {"-hex" == [lindex $args 0]} {
+ set str [lindex $args 1]
+ foreach ch [split $str {}] {
+ scan $ch %c val
+ append res [format %02x $val]
+ append res " "
+ }
+ debuglogline $res
+ return
+ }
+ if {"-dict" == [lindex $args 0]} {
+ set dict [lindex $args 1]
+ debuglogline "DEBUG dict:"
+ foreach {key val} $dict {
+ set val [string map [list \
+ \r \\r \n \\n \0 \\0 ] $val]
+ debuglogline "$key=$val"
+ }
+ return
+ }
+ set x [string map [list \
+ \r \\r \n \\n \0 \\0 ] $args]
+ debuglogline "DEBUG: $x"
+}
+
+# Internal. Throws an error if keys have not been initialized.
+proc S3::checkinit {} {
+ variable config
+ set error "S3 must be initialized with -accesskeyid and -secretaccesskey before use"
+ set e1 {S3 usage -accesskeyid "S3 identification not initialized"}
+ set e2 {S3 usage -secretaccesskey "S3 identification not initialized"}
+ if {[dict get $config -accesskeyid] eq ""} {
+ error $error "" $e1
+ }
+ if {[dict get $config -secretaccesskey] eq ""} {
+ error $error "" $e2
+ }
+}
+
+# Internal. Calculates the Content-Type for a given file name.
+# Naturally returns application/octet-stream if anything goes wrong.
+proc S3::contenttype {fname} {
+ if {$::tcl_platform(platform) == "windows"} {
+ set extension [file extension $fname]
+ uplevel #0 package require registry
+        set key "HKEY_CLASSES_ROOT\\"
+ if {"." != [string index $extension 0]} {append key .}
+ append key $extension
+ set ct "application/octet-stream"
+ if {$extension != ""} {
+ catch {set ct [registry get $key {Content Type}]} caught
+ }
+ } else {
+ # Assume something like Unix.
+ if {[file readable /etc/mime.types]} {
+ set extension [string trim [file extension $fname] "."]
+ set f [open /etc/mime.types r]
+ while {-1 != [gets $f line] && ![info exists c]} {
+ set line [string trim $line]
+ if {[string match "#*" $line]} continue
+ if {0 == [string length $line]} continue
+ set items [split $line]
+ for {set i 1} {$i < [llength $items]} {incr i} {
+ if {[lindex $items $i] eq $extension} {
+ set c [lindex $items 0]
+ break
+ }
+ }
+ }
+ close $f
+ if {![info exists c]} {
+ set ct "application/octet-stream"
+ } else {
+ set ct [string trim $c]
+ }
+ } else {
+ # No /etc/mime.types here.
+ if {[catch {exec file -i $fname} res]} {
+ set ct "application/octet-stream"
+ } else {
+ set ct [string range $res [expr {1+[string first : $res]}] end]
+ if {-1 != [string first ";" $ct]} {
+ set ct [string range $ct 0 [string first ";" $ct]]
+ }
+ set ct [string trim $ct "; "]
+ }
+ }
+ }
+ return $ct
+}
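+# Illustrative: [S3::contenttype photo.jpg] typically yields "image/jpeg",
+# via the registry on Windows or /etc/mime.types (or file -i) elsewhere.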
+
+# Change current configuration. Not object-oriented, so only one
+# configuration is tracked per interpreter.
+proc S3::Configure {args} {
+ variable config
+ variable config_orig
+ if {[llength $args] == 0} {return $config}
+ if {[llength $args] == 1 && ![dict exists $config [lindex $args 0]]} {
+ error "Bad option \"[lindex $args 0]\": must be [join [dict keys $config] ,\ ]" "" [list S3 usage [lindex $args 0] "Bad option to config"]
+ }
+ if {[llength $args] == 1} {return [dict get $config [lindex $args 0]]}
+ if {[llength $args] % 2 != 0} {
+ error "Config args must be -name val -name val" "" [list S3 usage [lindex $args end] "Odd number of config args"]
+ }
+ set new $config
+ foreach {tag val} $args {
+ if {![dict exists $new $tag]} {
+ error "Bad option \"$tag\": must be [join [dict keys $config] ,\ ]" "" [list S3 usage $tag "Bad option to config"]
+ }
+ dict set new $tag $val
+ if {$tag eq "-reset" && $val} {
+ set new $config_orig
+ }
+ }
+ if {[dict get $config -use-tls]} {
+ error "TLS for S3 not yet implemented!" "" \
+ [list S3 notyet -use-tls $config]
+ }
+ set config $new ; # Only update if all went well
+ return $config
+}
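+
+# Illustrative use (the values are placeholders, not real credentials):
+#   S3::Configure -accesskeyid "XXX" -secretaccesskey "YYY"
+#   S3::Configure -default-bucket my-bucket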
+
+# Suggest a unique bucket name based on usename and config info.
+proc S3::SuggestBucket {{usename ""}} {
+ checkinit
+ if {$usename eq ""} {set usename [::S3::Configure -bucket-prefix]}
+ if {$usename eq ""} {
+ error "S3::SuggestBucket requires name or -bucket-prefix set" \
+ "" [list S3 usage -bucket-prefix]
+ }
+ return $usename\.[::S3::Configure -accesskeyid]
+}
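+# E.g., with -bucket-prefix "TclS3" and -accesskeyid "XXX", this
+# returns "TclS3.XXX".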
+
+# Calculate authorization token for REST interaction.
+# Doesn't work yet for "Expires" type headers. Hence, only for "REST".
+# We specifically don't call checkinit because it's called in all
+# callers and we don't want to throw an error inside here.
+# Caveat Emptor if you expect otherwise.
+# This is internal, but useful enough you might want to invoke it.
+proc S3::authREST {verb resource content-type headers args} {
+ if {[llength $args] != 0} {
+ set body [lindex $args 0] ; # we use [info exists] later
+ }
+ if {${content-type} != "" && [dict exists $headers content-type]} {
+ set content-type [dict get $headers content-type]
+ }
+ dict unset headers content-type
+ set verb [string toupper $verb]
+ if {[info exists body]} {
+ set content-md5 [::base64::encode [::md5::md5 $body]]
+ dict set headers content-md5 ${content-md5}
+ dict set headers content-length [string length $body]
+ } elseif {[dict exists $headers content-md5]} {
+ set content-md5 [dict get $headers content-md5]
+ } else {
+ set content-md5 ""
+ }
+ if {[dict exists $headers x-amz-date]} {
+ set date ""
+ dict unset headers date
+ } elseif {[dict exists $headers date]} {
+ set date [dict get $headers date]
+ } else {
+ set date [clock format [clock seconds] -gmt true -format \
+ "%a, %d %b %Y %T %Z"]
+ dict set headers date $date
+ }
+ if {${content-type} != ""} {
+ dict set headers content-type ${content-type}
+ }
+ dict set headers host s3.amazonaws.com
+ set xamz ""
+ foreach key [lsort [dict keys $headers x-amz-*]] {
+ # Assume each is seen only once, for now, and is canonical already.
+ append xamz \n[string trim $key]:[string trim [dict get $headers $key]]
+ }
+ set xamz [string trim $xamz]
+ # Hmmm... Amazon lies. No \n after xamz if xamz is empty.
+ if {0 != [string length $xamz]} {append xamz \n}
+ set signthis \
+ "$verb\n${content-md5}\n${content-type}\n$date\n$xamz$resource"
+ S3::debug "Sign this:" $signthis ; S3::debug -hex $signthis
+ set sig [::sha1::hmac [S3::Configure -secretaccesskey] $signthis]
+ set sig [binary format H* $sig]
+ set sig [string trim [::base64::encode $sig]]
+ dict set headers authorization "AWS [S3::Configure -accesskeyid]:$sig"
+ return $headers
+}
+
+# Internal. Takes resource and parameters, tacks them together.
+# Useful enough you might want to invoke it yourself.
+proc S3::to_url {resource parameters} {
+ if {0 == [llength $parameters]} {return $resource}
+ if {-1 == [string first "?" $resource]} {
+ set front ?
+ } else {
+ set front &
+ }
+ foreach {key value} $parameters {
+ append resource $front $key "=" $value
+ set front &
+ }
+ return $resource
+}
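+# E.g., [S3::to_url /bucket/key {max-keys 10}] returns
+# "/bucket/key?max-keys=10".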
+
+# Internal. Encode a URL, including utf-8 versions.
+# Useful enough you might want to invoke it yourself.
+proc S3::encode_url {orig} {
+ set res ""
+ set re {[-a-zA-Z0-9/.,_]}
+ foreach ch [split $orig ""] {
+ if {[regexp $re $ch]} {
+ append res $ch
+ } else {
+ foreach uch [split [encoding convertto utf-8 $ch] ""] {
+ append res "%"
+ binary scan $uch H2 hex
+ append res $hex
+ }
+ }
+ }
+ if {$res ne $orig} {
+ S3::debug "URL Encoded:" $orig $res
+ }
+ return $res
+}
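+# E.g., [S3::encode_url {a b+c}] returns "a%20b%2bc"; characters
+# matching [-a-zA-Z0-9/.,_] pass through unchanged.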
+
+# This is used internally to either queue an event-driven
+# item or to simply call the next routine, depending on
+# whether the current transaction is supposed to be running
+# in the background or not.
+proc S3::nextdo {routine thunk direction args} {
+ global errorCode
+ S3::debug "nextdo" $routine $thunk $direction $args
+ if {[dict get $thunk blocking]} {
+ return [S3::$routine $thunk]
+ } else {
+ if {[llength $args] == 2} {
+ # fcopy failed!
+ S3::fail $thunk "S3 fcopy failed: [lindex $args 1]" "" \
+ [list S3 socket $errorCode]
+ } else {
+ fileevent [dict get $thunk S3chan] $direction \
+ [list S3::$routine $thunk]
+ if {$direction == "writable"} {
+ fileevent [dict get $thunk S3chan] readable {}
+ } else {
+ fileevent [dict get $thunk S3chan] writable {}
+ }
+ }
+ }
+}
+
+# The proverbial It. Do a REST call to Amazon S3 service.
+proc S3::REST {orig} {
+ variable config
+ checkinit
+ set EndPoint [dict get $config -service-access-point]
+
+ # Save the original stuff first.
+ set thunk [dict create orig $orig]
+
+ # Now add to thunk's top-level the important things
+ if {[dict exists $thunk orig resultvar]} {
+ dict set thunk blocking 0
+ } else {
+ dict set thunk blocking 1
+ }
+ if {[dict exists $thunk orig S3chan]} {
+ dict set thunk S3chan [dict get $thunk orig S3chan]
+ } elseif {[dict get $thunk blocking]} {
+ dict set thunk S3chan [socket $EndPoint 80]
+ } else {
+ dict set thunk S3chan [socket -async $EndPoint 80]
+ }
+ fconfigure [dict get $thunk S3chan] -translation binary -encoding binary
+
+ dict set thunk verb [dict get $thunk orig verb]
+ dict set thunk resource [S3::encode_url [dict get $thunk orig resource]]
+ if {[dict exists $orig rtype]} {
+ dict set thunk resource \
+ [dict get $thunk resource]?[dict get $orig rtype]
+ }
+ if {[dict exists $orig headers]} {
+ dict set thunk headers [dict get $orig headers]
+ } else {
+ dict set thunk headers [dict create]
+ }
+ if {[dict exists $orig infile]} {
+ dict set thunk infile [dict get $orig infile]
+ }
+ if {[dict exists $orig content-type]} {
+ dict set thunk content-type [dict get $orig content-type]
+ } else {
+ if {[dict exists $thunk infile]} {
+ set zz [dict get $thunk infile]
+ } else {
+ set zz [dict get $thunk resource]
+ }
+ if {-1 != [string first "?" $zz]} {
+ set zz [string range $zz 0 [expr {[string first "?" $zz]-1}]]
+ set zz [string trim $zz]
+ }
+ if {$zz != ""} {
+ catch {dict set thunk content-type [S3::contenttype $zz]}
+ } else {
+            dict set thunk content-type ""
+ }
+ }
+ set p {}
+ if {[dict exist $thunk orig parameters]} {
+ set p [dict get $thunk orig parameters]
+ }
+ dict set thunk url [S3::to_url [dict get $thunk resource] $p]
+
+ if {[dict exists $thunk orig inbody]} {
+ dict set thunk headers [S3::authREST \
+ [dict get $thunk verb] [dict get $thunk resource] \
+ [dict get $thunk content-type] [dict get $thunk headers] \
+ [dict get $thunk orig inbody] ]
+ } else {
+ dict set thunk headers [S3::authREST \
+ [dict get $thunk verb] [dict get $thunk resource] \
+ [dict get $thunk content-type] [dict get $thunk headers] ]
+ }
+ # Not the best place to put this code.
+ if {![info exists body] && [dict exists $thunk infile]} {
+ set size [file size [dict get $thunk infile]]
+ set x [dict get $thunk headers]
+ dict set x content-length $size
+ dict set thunk headers $x
+ }
+
+
+ # Ready to go!
+ return [S3::nextdo send_headers $thunk writable]
+}
+
+# Internal. Send the headers to Amazon. Might block if you have
+# really small socket buffers, but Amazon doesn't want
+# data that big anyway.
+proc S3::send_headers {thunk} {
+ S3::debug "Send-headers" $thunk
+ set s3 [dict get $thunk S3chan]
+ puts $s3 "[dict get $thunk verb] [dict get $thunk url] HTTP/1.0"
+ S3::debug ">> [dict get $thunk verb] [dict get $thunk url] HTTP/1.0"
+ foreach {key val} [dict get $thunk headers] {
+ puts $s3 "$key: $val"
+ S3::debug ">> $key: $val"
+ }
+ puts $s3 ""
+ flush $s3
+ return [S3::nextdo send_body $thunk writable]
+}
+
+# Internal. Send the body to Amazon.
+proc S3::send_body {thunk} {
+ global errorCode
+ set s3 [dict get $thunk S3chan]
+ if {[dict exists $thunk orig inbody]} {
+ # Send a string. Let's guess that even in non-blocking
+ # mode, this is small enough or Tcl's smart enough that
+ # we don't blow up the buffer.
+ puts -nonewline $s3 [dict get $thunk orig inbody]
+ flush $s3
+ return [S3::nextdo read_headers $thunk readable]
+ } elseif {![dict exists $thunk orig infile]} {
+ # No body, no file, so nothing more to do.
+ return [S3::nextdo read_headers $thunk readable]
+ } elseif {[dict get $thunk blocking]} {
+ # A blocking file copy. Still not too hard.
+ if {[catch {set inchan [open [dict get $thunk infile] r]} caught]} {
+ S3::fail $thunk "S3 could not open infile - $caught" "" \
+ [list S3 local [dict get $thunk infile] $errorCode]
+ }
+ fconfigure $inchan -translation binary -encoding binary
+ fileevent $s3 readable {}
+ fileevent $s3 writable {}
+ if {[catch {fcopy $inchan $s3 ; flush $s3 ; close $inchan} caught]} {
+ S3::fail $thunk "S3 could not copy infile - $caught" "" \
+ [list S3 local [dict get $thunk infile] $errorCode]
+ }
+ S3::nextdo read_headers $thunk readable
+ } else {
+ # The hard one. Background file copy.
+ fileevent $s3 readable {}
+ fileevent $s3 writable {}
+ if {[catch {set inchan [open [dict get $thunk infile] r]} caught]} {
+ S3::fail $thunk "S3 could not open infile - $caught" "" \
+ [list S3 local [dict get $thunk infile] $errorCode]
+ }
+ fconfigure $inchan -buffering none -translation binary -encoding binary
+ fconfigure $s3 -buffering none -translation binary \
+ -encoding binary -blocking 0 ; # Doesn't work without this?
+ dict set thunk inchan $inchan ; # So we can close it.
+ fcopy $inchan $s3 -command \
+ [list S3::nextdo read_headers $thunk readable]
+ }
+}
+
+# Internal. The first line has come back. Grab out the
+# stuff we care about.
+proc S3::parse_status {thunk line} {
+ # Got the status line
+ S3::debug "<< $line"
+ dict set thunk httpstatusline [string trim $line]
+ dict set thunk outheaders [dict create]
+ regexp {^HTTP/1.. (...) (.*)$} $line junk code message
+ dict set thunk httpstatus $code
+ dict set thunk httpmessage [string trim $message]
+ return $thunk
+}
+
+# A line of header information has come back. Grab it.
+# This probably is unhappy with multiple lines for one
+# header.
+proc S3::parse_header {thunk line} {
+ # Got a header line. For now, assume no continuations.
+ S3::debug "<< $line"
+ set line [string trim $line]
+ set left [string range $line 0 [expr {[string first ":" $line]-1}]]
+ set right [string range $line [expr {[string first ":" $line]+1}] end]
+ set left [string trim [string tolower $left]]
+ set right [string trim $right]
+ dict set thunk outheaders $left $right
+ return $thunk
+}
+
+# I don't know if HTTP requires a blank line after the headers if
+# there's no body.
+
+# Internal. Read all the headers, and throw if we get EOF before
+# we get any headers at all.
+proc S3::read_headers {thunk} {
+ set s3 [dict get $thunk S3chan]
+ flush $s3
+ fconfigure $s3 -blocking [dict get $thunk blocking]
+ if {[dict get $thunk blocking]} {
+ # Blocking. Just read to a blank line. Otherwise,
+ # if we use nextdo here, we wind up nesting horribly.
+ # If we're not blocking, of course, we're returning
+ # to the event loop each time, so that's OK.
+ set count [gets $s3 line]
+ if {[eof $s3]} {
+ S3::fail $thunk "S3 EOF during status line read" "" "S3 socket EOF"
+ }
+ set thunk [S3::parse_status $thunk $line]
+ while {[string trim $line] != ""} {
+ set count [gets $s3 line]
+ if {$count == -1 && 0 == [dict size [dict get $thunk outheaders]]} {
+ S3::fail $thunk "S3 EOF during headers read" "" "S3 socket EOF"
+ }
+ if {[string trim $line] != ""} {
+ set thunk [S3::parse_header $thunk $line]
+ }
+ }
+ return [S3::nextdo read_body $thunk readable]
+ } else {
+ # Non-blocking, so we have to reenter for each line.
+ # First, fix up the file handle, tho.
+ if {[dict exists $thunk inchan]} {
+ close [dict get $thunk inchan]
+ dict unset thunk inchan
+ }
+ # Now get one header.
+ set count [gets $s3 line]
+ if {[eof $s3]} {
+ fileevent $s3 readable {}
+ fileevent $s3 writable {}
+ if {![dict exists $thunk httpstatusline]} {
+ S3::fail $thunk "S3 EOF during status line read" "" "S3 socket EOF"
+ } elseif {0 == [dict size [dict get $thunk outheaders]]} {
+ S3::fail $thunk "S3 EOF during header read" "" "S3 socket EOF"
+ }
+ }
+ if {$count < 0} return ; # Wait for a whole line
+ set line [string trim $line]
+ if {![dict exists $thunk httpstatus]} {
+ set thunk [S3::parse_status $thunk $line]
+ S3::nextdo read_headers $thunk readable ; # New thunk here.
+ } elseif {$line != ""} {
+ set thunk [S3::parse_header $thunk $line]
+ S3::nextdo read_headers $thunk readable ; # New thunk here.
+ } else {
+ # Got an empty line. Switch to copying the body.
+ S3::nextdo read_body $thunk readable
+ }
+ }
+}
+
+# Internal. Read the body of the response.
+proc S3::read_body {thunk} {
+ set s3 [dict get $thunk S3chan]
+ if {[dict get $thunk blocking]} {
+ # Easy. Just read it.
+ if {[dict exists $thunk orig outchan]} {
+ fcopy $s3 [dict get $thunk orig outchan]
+ } else {
+ set x [read $s3]
+ dict set thunk outbody $x
+ #S3::debug "Body: $x" -- Disable unconditional wasteful conversion to string
+ #Need better debug system which does this only when active.
+ }
+ return [S3::nextdo all_done $thunk readable]
+ } else {
+ # Nonblocking mode.
+ if {[dict exists $thunk orig outchan]} {
+ fileevent $s3 readable {}
+ fileevent $s3 writable {}
+ fcopy $s3 [dict get $thunk orig outchan] -command \
+ [list S3::nextdo all_done $thunk readable]
+ } else {
+ dict append thunk outbody [read $s3]
+ if {[eof $s3]} {
+ # We're done.
+ S3::nextdo all_done $thunk readable
+ } else {
+ S3::nextdo read_body $thunk readable
+ }
+ }
+ }
+}
+
+# Internal. Convenience function.
+proc S3::fail {thunk error errorInfo errorCode} {
+ S3::all_done $thunk $error $errorInfo $errorCode
+}
+
+# Internal. We're all done the transaction. Clean up everything,
+# potentially record errors, close channels, etc etc etc.
+proc S3::all_done {thunk {error ""} {errorInfo ""} {errorCode ""}} {
+ set s3 [dict get $thunk S3chan]
+ catch {
+ fileevent $s3 readable {}
+ fileevent $s3 writable {}
+ }
+ if {![dict exists $thunk orig S3chan]} {
+ catch {close $s3}
+ }
+ set res [dict get $thunk orig]
+ catch {
+ dict set res httpstatus [dict get $thunk httpstatus]
+ dict set res httpmessage [dict get $thunk httpmessage]
+ dict set res outheaders [dict get $thunk outheaders]
+ }
+ if {![dict exists $thunk orig outchan]} {
+ if {[dict exists $thunk outbody]} {
+ dict set res outbody [dict get $thunk outbody]
+ } else {
+ # Probably HTTP failure
+            dict set res outbody {}
+ }
+ }
+ if {$error ne ""} {
+ dict set res error $error
+ dict set res errorInfo $errorInfo
+ dict set res errorCode $errorCode
+ }
+ if {![dict get $thunk blocking]} {
+ after 0 [list uplevel #0 \
+ [list set [dict get $thunk orig resultvar] $res]]
+ }
+ if {$error eq "" || ![dict get $thunk blocking] || \
+ ([dict exists $thunk orig throwsocket] && \
+ "return" == [dict get $thunk orig throwsocket])} {
+ return $res
+ } else {
+ error $error $errorInfo $errorCode
+ }
+}
+
+# Internal. Parse the lst and make sure it has only keys from the 'valid' list.
+# Used to parse arguments going into the higher-level functions.
+proc S3::parseargs1 {lst valid} {
+ if {[llength $lst] % 2 != 0} {
+ error "Option list must be even -name val pairs" \
+ "" [list S3 usage [lindex $lst end] $lst]
+ }
+ foreach {key val} $lst {
+ # Sadly, lsearch applies -glob to the wrong thing for our needs
+ set found 0
+ foreach v $valid {
+ if {[string match $v $key]} {set found 1 ; break}
+ }
+ if {!$found} {
+ error "Option list has invalid -key" \
+ "" [list S3 usage $key $lst]
+ }
+ }
+ return $lst ; # It seems OK
+}
+
+# Internal. Create a variable for higher-level functions to vwait.
+proc S3::bgvar {} {
+ variable bgvar_counter
+ incr bgvar_counter
+ set name ::S3::bgvar$bgvar_counter
+ return $name
+}
+
+# Internal. Given a request and the arguments, run the S3::REST in
+# the foreground or the background as appropriate. Also, do retries
+# for internal errors.
+proc S3::maybebackground {req myargs} {
+ variable config
+ global errorCode errorInfo
+ set mytries [expr {1+[dict get $config -retries]}]
+ set delay 2000
+ dict set req throwsocket return
+ while {1} {
+ if {![dict exists $myargs -blocking] || [dict get $myargs -blocking]} {
+ set dict [S3::REST $req]
+ } else {
+ set res [bgvar]
+ dict set req resultvar $res
+ S3::REST $req
+ vwait $res
+ set dict [set $res]
+ unset $res ; # clean up temps
+ }
+ if {[dict exists $dict error]} {
+ set code [dict get $dict errorCode]
+ if {"S3" != [lindex $code 0] || "socket" != [lindex $code 1]} {
+ error [dict get $dict error] \
+ [dict get $dict errorInfo] \
+ [dict get $dict errorCode]
+ }
+ }
+ incr mytries -1
+ incr delay $delay ; if {20000 < $delay} {set delay 20000}
+ if {"500" ne [dict get $dict httpstatus] || $mytries <= 0} {
+ return $dict
+ }
+ if {![dict exists $myargs -blocking] || [dict get $myargs -blocking]} {
+ after $delay
+ } else {
+ set timer [bgvar]
+ after $delay [list set $timer 1]
+ vwait $timer
+ unset $timer
+ }
+ }
+}
+
+# Internal. Maybe throw an HTTP error if httpstatus not in 200 range.
+proc S3::throwhttp {dict} {
+ set hs [dict get $dict httpstatus]
+ if {![string match "2??" $hs]} {
+ error "S3 received non-OK HTTP result of $hs" "" \
+ [list S3 remote $hs $dict]
+ }
+}
+
+# Public. Returns the list of buckets for this user.
+proc S3::ListAllMyBuckets {args} {
+ checkinit ; # I know this gets done later.
+ set myargs [S3::parseargs1 $args {-blocking -parse-xml -result-type}]
+ if {![dict exists $myargs -result-type]} {
+ dict set myargs -result-type names
+ }
+ if {![dict exists $myargs -blocking]} {
+ dict set myargs -blocking true
+ }
+ set restype [dict get $myargs -result-type]
+ if {$restype eq "REST" && [dict exists $myargs -parse-xml]} {
+ error "Do not use REST with -parse-xml" "" \
+ [list S3 usage -parse-xml $args]
+ }
+ if {![dict exists $myargs -parse-xml]} {
+ # We need to fetch the results.
+ set req [dict create verb GET resource /]
+ set dict [S3::maybebackground $req $myargs]
+ if {$restype eq "REST"} {
+ return $dict ; #we're done!
+ }
+ S3::throwhttp $dict ; #make sure it worked.
+ set xml [dict get $dict outbody]
+ } else {
+ set xml [dict get $myargs -parse-xml]
+ }
+ # Here, we either already returned the dict, or the XML is in "xml".
+ if {$restype eq "xml"} {return $xml}
+ if {[catch {set pxml [::xsxp::parse $xml]}]} {
+ error "S3 invalid XML structure" "" [list S3 usage xml $xml]
+ }
+ if {$restype eq "pxml"} {return $pxml}
+ if {$restype eq "dict" || $restype eq "names"} {
+ set buckets [::xsxp::fetch $pxml "Buckets" %CHILDREN]
+ set names {} ; set dates {}
+ foreach bucket $buckets {
+ lappend names [::xsxp::fetch $bucket "Name" %PCDATA]
+ lappend dates [::xsxp::fetch $bucket "CreationDate" %PCDATA]
+ }
+ if {$restype eq "names"} {
+ return $names
+ } else {
+ return [dict create \
+ Owner/ID [::xsxp::fetch $pxml "Owner/ID" %PCDATA] \
+ Owner/DisplayName \
+ [::xsxp::fetch $pxml "Owner/DisplayName" %PCDATA] \
+ Bucket/Name $names Bucket/Date $dates \
+ ]
+ }
+ }
+ if {$restype eq "owner"} {
+ return [list [::xsxp::fetch $pxml Owner/ID %PCDATA] \
+ [::xsxp::fetch $pxml Owner/DisplayName %PCDATA] ]
+ }
+ error "ListAllMyBuckets requires -result-type to be REST, xml, pxml, dict, owner, or names" "" [list S3 usage -result-type $args]
+}
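+# Illustrative use: [S3::ListAllMyBuckets] returns the list of bucket
+# names, since -result-type defaults to "names".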
+
+# Public. Create a bucket.
+proc S3::PutBucket {args} {
+ checkinit
+ set myargs [S3::parseargs1 $args {-blocking -bucket -acl}]
+ if {![dict exists $myargs -acl]} {
+ dict set myargs -acl [S3::Configure -default-acl]
+ }
+ if {![dict exists $myargs -bucket]} {
+ dict set myargs -bucket [S3::Configure -default-bucket]
+ }
+ dict set myargs -bucket [string trim [dict get $myargs -bucket] "/ "]
+    if {"" eq [dict get $myargs -bucket]} {
+ error "PutBucket requires -bucket" "" [list S3 usage -bucket $args]
+ }
+
+ set req [dict create verb PUT resource /[dict get $myargs -bucket]]
+ if {[dict exists $myargs -acl]} {
+ dict set req headers [list x-amz-acl [dict get $myargs -acl]]
+ }
+ set dict [S3::maybebackground $req $myargs]
+ S3::throwhttp $dict
+ return "" ; # until we decide what to return.
+}
+
+# Public. Delete a bucket.
+proc S3::DeleteBucket {args} {
+ checkinit
+ set myargs [S3::parseargs1 $args {-blocking -bucket}]
+ if {![dict exists $myargs -bucket]} {
+ error "DeleteBucket requires -bucket" "" [list S3 usage -bucket $args]
+ }
+    dict set myargs -bucket [string trim [dict get $myargs -bucket] "/ "]
+
+ set req [dict create verb DELETE resource /[dict get $myargs -bucket]]
+ set dict [S3::maybebackground $req $myargs]
+ S3::throwhttp $dict
+ return "" ; # until we decide what to return.
+}
+
+# Internal. Suck out the one and only answer from the list, if needed.
+proc S3::firstif {list myargs} {
+ if {[dict exists $myargs -max-keys]} {
+ return [lindex $list 0]
+ } else {
+ return $list
+ }
+}
+
+# Public. Get the list of resources within a bucket.
+proc S3::GetBucket {args} {
+ checkinit
+ set myargs [S3::parseargs1 $args {
+ -bucket -blocking -parse-xml -max-keys
+ -result-type -prefix -delimiter
+ -TEST
+ }]
+ if {![dict exists $myargs -bucket]} {
+ dict set myargs -bucket [S3::Configure -default-bucket]
+ }
+ dict set myargs -bucket [string trim [dict get $myargs -bucket] "/ "]
+    if {"" eq [dict get $myargs -bucket]} {
+        error "GetBucket requires -bucket" "" [list S3 usage -bucket $args]
+    }
+ if {![dict exists $myargs -result-type]} {
+ dict set myargs -result-type names
+ }
+ if {[dict get $myargs -result-type] eq "REST" && \
+ [dict exists $myargs "-parse-xml"]} {
+ error "GetBucket can't have -parse-xml with REST result" "" \
+ [list S3 usage -parse-xml $args]
+ }
+ set req [dict create verb GET resource /[dict get $myargs -bucket]]
+ set parameters {}
+ # Now, just to make test cases easier...
+ if {[dict exists $myargs -TEST]} {
+ dict set parameters max-keys [dict get $myargs -TEST]
+ }
+ # Back to your regularly scheduled argument parsing
+ if {[dict exists $myargs -max-keys]} {
+ dict set parameters max-keys [dict get $myargs -max-keys]
+ }
+ if {[dict exists $myargs -prefix]} {
+ set p [dict get $myargs -prefix]
+ if {[string match "/*" $p]} {
+ set p [string range $p 1 end]
+ }
+ dict set parameters prefix $p
+ }
+ if {[dict exists $myargs -delimiter]} {
+ dict set parameters delimiter [dict get $myargs -delimiter]
+ }
+ set nextmarker0 {} ; # We use this for -result-type dict.
+ if {![dict exists $myargs -parse-xml]} {
+ # Go fetch answers.
+ # Current xaction in "0" vars, with accumulation in "L" vars.
+ # Ultimate result of this loop is $RESTL, a list of REST results.
+ set RESTL [list]
+ while {1} {
+ set req0 $req ; dict set req0 parameters $parameters
+ set REST0 [S3::maybebackground $req0 $myargs]
+ S3::throwhttp $REST0
+ lappend RESTL $REST0
+ if {[dict exists $myargs -max-keys]} {
+ # We were given a limit, so just return the answer.
+ break
+ }
+ set pxml0 [::xsxp::parse [dict get $REST0 outbody]]
+ set trunc0 [expr "true" eq \
+ [::xsxp::fetch $pxml0 IsTruncated %PCDATA]]
+ if {!$trunc0} {
+ # We've retrieved the final block, so go parse it.
+ set nextmarker0 "" ; # For later.
+ break
+ }
+ # Find the highest contents entry. (Would have been
+ # easier if Amazon always supplied NextMarker.)
+ set nextmarker0 {}
+ foreach {only tag} {Contents Key CommonPrefixes Prefix} {
+ set only0 [::xsxp::only $pxml0 $only]
+ if {0 < [llength $only0]} {
+ set k0 [::xsxp::fetch [lindex $only0 end] $tag %PCDATA]
+ if {[string compare $nextmarker0 $k0] < 0} {
+ set nextmarker0 $k0
+ }
+ }
+ }
+ if {$nextmarker0 eq ""} {error "Internal Error in S3 library"}
+ # Here we have the next marker, so fetch the next REST
+ dict set parameters marker $nextmarker0
+ # Note - $nextmarker0 is used way down below again!
+ }
+ # OK, at this point, the caller did not provide the xml via -parse-xml
+ # And now we have a list of REST results. So let's process.
+ if {[dict get $myargs -result-type] eq "REST"} {
+ return [S3::firstif $RESTL $myargs]
+ }
+ set xmlL [list]
+ foreach entry $RESTL {
+ lappend xmlL [dict get $entry outbody]
+ }
+ unset RESTL ; # just to save memory
+ } else {
+ # Well, we've parsed out the XML from the REST,
+ # so we're ready for -parse-xml
+ set xmlL [list [dict get $myargs -parse-xml]]
+ }
+ if {[dict get $myargs -result-type] eq "xml"} {
+ return [S3::firstif $xmlL $myargs]
+ }
+ set pxmlL [list]
+ foreach xml $xmlL {
+ lappend pxmlL [::xsxp::parse $xml]
+ }
+ unset xmlL
+ if {[dict get $myargs -result-type] eq "pxml"} {
+ return [S3::firstif $pxmlL $myargs]
+ }
+ # Here, for result types of "names" and "dict",
+ # we need to actually parse out all the results.
+ if {[dict get $myargs -result-type] eq "names"} {
+ # The easy one.
+ set names [list]
+ foreach pxml $pxmlL {
+ set con0 [::xsxp::only $pxml Contents]
+ set con1 [::xsxp::only $pxml CommonPrefixes]
+ lappend names {*}[concat [::xsxp::fetchall $con0 Key %PCDATA] \
+ [::xsxp::fetchall $con1 Prefix %PCDATA]]
+ }
+ return [lsort $names]
+ } elseif {[dict get $myargs -result-type] eq "dict"} {
+ # The harder one.
+ set last0 [lindex $pxmlL end]
+ set res [dict create]
+ foreach thing {Name Prefix Marker MaxKeys IsTruncated} {
+ dict set res $thing [::xsxp::fetch $last0 $thing %PCDATA?]
+ }
+ dict set res NextMarker $nextmarker0 ; # From way up above.
+ set Prefix [list]
+ set names {Key LastModified ETag Size Owner/ID Owner/DisplayName StorageClass}
+ foreach name $names {set $name [list]}
+ foreach pxml $pxmlL {
+ foreach tag [::xsxp::only $pxml CommonPrefixes] {
+ lappend Prefix [::xsxp::fetch $tag Prefix %PCDATA]
+ }
+ foreach tag [::xsxp::only $pxml Contents] {
+ foreach name $names {
+ lappend $name [::xsxp::fetch $tag $name %PCDATA]
+ }
+ }
+ }
+ dict set res CommonPrefixes/Prefix $Prefix
+ foreach name $names {dict set res $name [set $name]}
+ return $res
+ } else {
+ # The hardest one ;-)
+ error "GetBucket Invalid result type, must be REST, xml, pxml, names, or dict" "" [list S3 usage -result-type $args]
+ }
+}
+
+# Internal. Compare a resource to a file.
+# Returns 1 if they're different, 0 if they're the same.
+# Note that using If-Modified-Since and/or If-Match,If-None-Match
+# might wind up being more efficient than pulling the head
+# and checking. However, this allows for slop, checking both
+# the etag and the date, only generating local etag if the
+# date and length indicate they're the same, and so on.
+# Direction is G or P for Get or Put.
+# Assumes the source always exists. Obviously, Get and Put will throw if not,
+# but not because of this.
+proc S3::compare {myargs direction} {
+ variable config
+ global errorInfo
+ set compare [dict get $myargs -compare]
+ if {$compare ni {always never exists missing newer date checksum different}} {
+ error "-compare must be always, never, exists, missing, newer, date, checksum, or different" "" \
+ [list S3 usage -compare $myargs]
+ }
+ if {"never" eq $compare} {return 0}
+ if {"always" eq $compare} {return 1}
+ if {[dict exists $myargs -file] && [file exists [dict get $myargs -file]]} {
+ set local_exists 1
+ } else {
+ set local_exists 0
+ }
+ # Avoid hitting S3 if we don't need to.
+ if {$direction eq "G" && "exists" eq $compare} {return $local_exists}
+ if {$direction eq "G" && "missing" eq $compare} {
+ return [expr !$local_exists]
+ }
+ # We need to get the headers from the resource.
+ set req [dict create \
+ resource /[dict get $myargs -bucket]/[dict get $myargs -resource] \
+ verb HEAD ]
+ set res [S3::maybebackground $req $myargs]
+ set httpstatus [dict get $res httpstatus]
+ if {"404" eq $httpstatus} {
+ set remote_exists 0
+ } elseif {[string match "2??" $httpstatus]} {
+ set remote_exists 1
+ } else {
+        error "S3: Neither 404 nor 2xx on conditional compare" "" \
+ [list S3 remote $httpstatus $res]
+ }
+ if {$direction eq "P"} {
+ if {"exists" eq $compare} {return $remote_exists}
+ if {"missing" eq $compare} {return [expr {!$remote_exists}]}
+ if {!$remote_exists} {return 1}
+ } elseif {$direction eq "G"} {
+ # Actually already handled above, but it never hurts...
+ if {"exists" eq $compare} {return $local_exists}
+ if {"missing" eq $compare} {return [expr {!$local_exists}]}
+ }
+ set outheaders [dict get $res outheaders]
+ if {[dict exists $outheaders content-length]} {
+ set remote_length [dict get $outheaders content-length]
+ } else {
+ set remote_length -1
+ }
+ if {[dict exists $outheaders etag]} {
+ set remote_etag [string tolower \
+ [string trim [dict get $outheaders etag] \"]]
+ } else {
+ set remote_etag "YYY"
+ }
+ if {[dict exists $outheaders last-modified]} {
+ set remote_date [clock scan [dict get $outheaders last-modified]]
+ } else {
+ set remote_date -1
+ }
+ if {[dict exists $myargs -content]} {
+ # Probably should work this out better...
+ #set local_length [string length [encoding convert-to utf-8 \
+ #[dict get $myargs -content]]]
+ set local_length [string length [dict get $myargs -content]]
+ } elseif {$local_exists} {
+ if {[catch {file size [dict get $myargs -file]} local_length]} {
+ error "S3: Couldn't stat [dict get $myargs -file]" "" \
+ [list S3 local $errorInfo]
+ }
+ } else {
+ set local_length -2
+ }
+ if {[dict exists $myargs -content]} {
+ set local_date [clock seconds]
+ } elseif {$local_exists} {
+ set local_date [file mtime [dict get $myargs -file]]
+ # Shouldn't throw, since [file size] worked.
+ } else {
+ set local_date -2
+ }
+ if {$direction eq "P"} {
+ if {"newer" eq $compare} {
+ if {$remote_date < $local_date - [dict get $config -slop-seconds]} {
+ return 1 ; # Yes, local is newer
+ } else {
+ return 0 ; # Older, or the same
+ }
+ }
+ } elseif {$direction eq "G"} {
+ if {"newer" eq $compare} {
+ if {$local_date < $remote_date - [dict get $config -slop-seconds]} {
+ return 1 ; # Yes, remote is later.
+ } else {
+ return 0 ; # Local is older or same.
+ }
+ }
+ }
+ if {[dict get $config -slop-seconds] <= abs($local_date - $remote_date)} {
+ set date_diff 1 ; # Difference is greater
+ } else {
+ set date_diff 0 ; # Difference negligible
+ }
+ if {"date" eq $compare} {return $date_diff}
+ if {"different" eq $compare && [dict exists $myargs -file] && $date_diff} {
+ return 1
+ }
+ # Date's the same, but we're also interested in content, so check the rest
+ # Only others to handle are checksum and different-with-matching-dates
+ if {$local_length != $remote_length} {return 1} ; #easy quick case
+ if {[dict exists $myargs -file] && $local_exists} {
+ if {[catch {
+ # Maybe deal with making this backgroundable too?
+ set local_etag [string tolower \
+ [::md5::md5 -hex -filename [dict get $myargs -file]]]
+ } caught]} {
+ # Maybe you can stat but not read it?
+ error "S3 could not hash file" "" \
+ [list S3 local [dict get $myargs -file] $errorInfo]
+ }
+ } elseif {[dict exists $myargs -content]} {
+ set local_etag [string tolower \
+ [string tolower [::md5::md5 -hex [dict get $myargs -content]]]]
+ } else {
+ set local_etag "XXX"
+ }
+ # puts "local: $local_etag\nremote: $remote_etag"
+ if {$local_etag eq $remote_etag} {return 0} {return 1}
+}
+
+# Internal. Calculates the ACL based on file permissions.
+proc S3::calcacl {myargs} {
+ # How would one work this under Windows, then?
+ # Silly way: invoke [exec cacls $filename],
+ # parse the result looking for Everyone:F or Everyone:R
+ # Messy security if someone replaces the cacls.exe or something.
+ error "S3 Not Yet Implemented" "" [list S3 notyet calcacl $myargs]
+ set result [S3::Configure -default-acl]
+ catch {
+ set chmod [file attributes [dict get $myargs -file] -permissions]
+ set chmod [expr {$chmod & 6}]
+ if {$chmod == 0} {set result private}
+ if {$chmod == 2} {set result public-write}
+ if {$chmod == 6} {set result public-read-write}
+ }
+}
+
+# Public. Put a resource into a bucket.
+proc S3::Put {args} {
+ checkinit
+ set myargs [S3::parseargs1 $args {
+ -bucket -blocking -file -content -resource -acl
+ -content-type -x-amz-meta-* -compare
+ }]
+ if {![dict exists $myargs -bucket]} {
+ dict set myargs -bucket [S3::Configure -default-bucket]
+ }
+ dict set myargs -bucket [string trim [dict get $myargs -bucket] "/ "]
+ if {"" eq [dict get $myargs -bucket]} {
+ error "Put requires -bucket" "" [list S3 usage -bucket $args]
+ }
+ if {![dict exists $myargs -blocking]} {
+ dict set myargs -blocking true
+ }
+ if {![dict exists $myargs -file] && ![dict exists $myargs -content]} {
+ error "Put requires -file or -content" "" [list S3 usage -file $args]
+ }
+ if {[dict exists $myargs -file] && [dict exists $myargs -content]} {
+ error "Put says -file, -content mutually exclusive" "" [list S3 usage -file $args]
+ }
+ if {![dict exists $myargs -resource]} {
+ error "Put requires -resource" "" [list S3 usage -resource $args]
+ }
+ if {![dict exists $myargs -compare]} {
+ dict set myargs -compare [S3::Configure -default-compare]
+ }
+ if {![dict exists $myargs -acl] && "" ne [S3::Configure -default-acl]} {
+ dict set myargs -acl [S3::Configure -default-acl]
+ }
+ if {[dict exists $myargs -file] && \
+ "never" ne [dict get $myargs -compare] && \
+ ![file exists [dict get $myargs -file]]} {
+ error "Put -file doesn't exist: [dict get $myargs -file]" \
+ "" [list S3 usage -file $args]
+ }
+ # Clean up bucket, and take one leading slash (if any) off resource.
+ if {[string match "/*" [dict get $myargs -resource]]} {
+ dict set myargs -resource \
+ [string range [dict get $myargs -resource] 1 end]
+ }
+ # See if we need to copy it.
+ set comp [S3::compare $myargs P]
+ if {!$comp} {return 0} ; # skip it, then.
+
+ # Oookeydookey. At this point, we're actually going to send
+ # the file, so all we need to do is build the request array.
+ set req [dict create verb PUT \
+ resource /[dict get $myargs -bucket]/[dict get $myargs -resource]]
+ if {[dict exists $myargs -file]} {
+ dict set req infile [dict get $myargs -file]
+ } else {
+ dict set req inbody [dict get $myargs -content]
+ }
+ if {[dict exists $myargs -content-type]} {
+ dict set req content-type [dict get $myargs -content-type]
+ }
+ set headers {}
+ foreach xhead [dict keys $myargs -x-amz-meta-*] {
+ dict set headers [string range $xhead 1 end] [dict get $myargs $xhead]
+ }
+ set xmlacl "" ; # For calc and keep
+ if {[dict exists $myargs -acl]} {
+ if {[dict get $myargs -acl] eq "calc"} {
+ # We could make this more complicated by
+ # assigning it to xmlacl after building it.
+ dict set myargs -acl [S3::calcacl $myargs]
+ } elseif {[dict get $myargs -acl] eq "keep"} {
+ dict set myargs -acl [S3::Configure -default-acl]
+ catch {
+ set xmlacl [S3::GetAcl \
+ -bucket [dict get $myargs -bucket] \
+ -resource [dict get $myargs -resource] \
+ -blocking [dict get $myargs -blocking] \
+ -result-type xml]
+ }
+ }
+ dict set headers x-amz-acl [dict get $myargs -acl]
+ }
+ dict set req headers $headers
+ # That should do it.
+ set res [S3::maybebackground $req $myargs]
+ S3::throwhttp $res
+ if {"<" == [string index $xmlacl 0]} {
+ # Set the saved ACL back on the new object
+ S3::PutAcl \
+ -bucket [dict get $myargs -bucket] \
+ -resource [dict get $myargs -resource] \
+ -blocking [dict get $myargs -blocking] \
+ -acl $xmlacl
+ }
+ return 1 ; # Yep, we copied it!
+}
+
+# Public. Get a resource from a bucket.
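+# A minimal usage sketch (names are hypothetical):
+#   S3::Get -bucket mybucket -resource docs/remote.txt \
+#       -file local.txt -timestamp aws
+# Returns 1 if the object was fetched, 0 if -compare decided to skip it.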
+proc S3::Get {args} {
+ global errorCode
+ checkinit
+ set myargs [S3::parseargs1 $args {
+ -bucket -blocking -file -content -resource -timestamp
+ -headers -compare
+ }]
+ if {![dict exists $myargs -bucket]} {
+ dict set myargs -bucket [S3::Configure -default-bucket]
+ }
+ dict set myargs -bucket [string trim [dict get $myargs -bucket] "/ "]
+ if {"" eq [dict get $myargs -bucket]} {
+ error "Get requires -bucket" "" [list S3 usage -bucket $args]
+ }
+ if {![dict exists $myargs -file] && ![dict exists $myargs -content]} {
+ error "Get requires -file or -content" "" [list S3 usage -file $args]
+ }
+ if {[dict exists $myargs -file] && [dict exists $myargs -content]} {
+ error "Get says -file, -content mutually exclusive" "" [list S3 usage -file $args]
+ }
+ if {![dict exists $myargs -resource]} {
+ error "Get requires -resource" "" [list S3 usage -resource $args]
+ }
+ if {![dict exists $myargs -compare]} {
+ dict set myargs -compare [S3::Configure -default-compare]
+ }
+ # Clean up bucket, and take one leading slash (if any) off resource.
+ if {[string match "/*" [dict get $myargs -resource]]} {
+ dict set myargs -resource \
+ [string range [dict get $myargs -resource] 1 end]
+ }
+ # See if we need to copy it.
+ if {"never" eq [dict get $myargs -compare]} {return 0}
+ if {[dict exists $myargs -content]} {
+ set comp 1
+ } else {
+ set comp [S3::compare $myargs G]
+ }
+ if {!$comp} {return 0} ; # skip it, then.
+
+ # Oookeydookey. At this point, we're actually going to fetch
+ # the file, so all we need to do is build the request array.
+ set req [dict create verb GET \
+ resource /[dict get $myargs -bucket]/[dict get $myargs -resource]]
+ if {[dict exists $myargs -file]} {
+ set pre_exists [file exists [dict get $myargs -file]]
+ if {[catch {
+ set x [open [dict get $myargs -file] w]
+ fconfigure $x -translation binary -encoding binary
+ } caught]} {
+ error "Get could not create file [dict get $myargs -file]" "" \
+ [list S3 local -file $errorCode]
+ }
+ dict set req outchan $x
+ }
+ # That should do it.
+ set res [S3::maybebackground $req $myargs]
+ if {[dict exists $req outchan]} {
+ catch {close [dict get $req outchan]}
+ if {![string match "2??" [dict get $res httpstatus]] && !$pre_exists} {
+ catch {file delete -force -- [dict get $myargs -file]}
+ }
+ }
+ S3::throwhttp $res
+ if {[dict exists $myargs -headers]} {
+ uplevel 1 \
+ [list set [dict get $myargs -headers] [dict get $res outheaders]]
+ }
+ if {[dict exists $myargs -content]} {
+ uplevel 1 \
+ [list set [dict get $myargs -content] [dict get $res outbody]]
+ }
+ if {[dict exists $myargs -timestamp] && [dict exists $myargs -file]} {
+ if {"aws" eq [dict get $myargs -timestamp]} {
+ catch {
+ set t [dict get $res outheaders last-modified]
+ set t [clock scan $t -gmt true]
+ file mtime [dict get $myargs -file] $t
+ }
+ }
+ }
+ return 1 ; # Yep, we copied it!
+}
+
+# Public. Get information about a resource in a bucket.
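+# A minimal usage sketch (names are hypothetical):
+#   S3::Head -bucket mybucket -resource docs/remote.txt -status s
+# Returns 1 for a 2xx HTTP status; here s would hold e.g. {200 OK}.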
+proc S3::Head {args} {
+ global errorCode
+ checkinit
+ set myargs [S3::parseargs1 $args {
+ -bucket -blocking -resource -headers -dict -status
+ }]
+ if {![dict exists $myargs -bucket]} {
+ dict set myargs -bucket [S3::Configure -default-bucket]
+ }
+ dict set myargs -bucket [string trim [dict get $myargs -bucket] "/ "]
+ if {"" eq [dict get $myargs -bucket]} {
+ error "Head requires -bucket" "" [list S3 usage -bucket $args]
+ }
+ if {![dict exists $myargs -resource]} {
+ error "Head requires -resource" "" [list S3 usage -resource $args]
+ }
+ # Clean up bucket, and take one leading slash (if any) off resource.
+ if {[string match "/*" [dict get $myargs -resource]]} {
+ dict set myargs -resource \
+ [string range [dict get $myargs -resource] 1 end]
+ }
+ set req [dict create verb HEAD \
+ resource /[dict get $myargs -bucket]/[dict get $myargs -resource]]
+ set res [S3::maybebackground $req $myargs]
+ if {[dict exists $myargs -dict]} {
+ uplevel 1 \
+ [list set [dict get $myargs -dict] $res]
+ }
+ if {[dict exists $myargs -headers]} {
+ uplevel 1 \
+ [list set [dict get $myargs -headers] [dict get $res outheaders]]
+ }
+ if {[dict exists $myargs -status]} {
+ set x [list [dict get $res httpstatus] [dict get $res httpmessage]]
+ uplevel 1 \
+ [list set [dict get $myargs -status] $x]
+ }
+ return [string match "2??" [dict get $res httpstatus]]
+}
+
+# Public. Get the full ACL from an object and parse it into something useful.
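+# A minimal usage sketch (names are hypothetical):
+#   set res [S3::GetAcl -bucket mybucket -resource docs/remote.txt]
+# With the default -result-type dict, res resembles
+#   owner <canonical-id> acl {FULL_CONTROL <id-list> READ <id-list>}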
+proc S3::GetAcl {args} {
+ global errorCode
+ checkinit
+ set myargs [S3::parseargs1 $args {
+ -bucket -blocking -resource -result-type -parse-xml
+ }]
+ if {![dict exists $myargs -bucket]} {
+ dict set myargs -bucket [S3::Configure -default-bucket]
+ }
+ dict set myargs -bucket [string trim [dict get $myargs -bucket] "/ "]
+ if {![dict exists $myargs -result-type]} {
+ dict set myargs -result-type "dict"
+ }
+ set restype [dict get $myargs -result-type]
+ if {$restype eq "REST" && [dict exists $myargs -parse-xml]} {
+ error "Do not use REST with -parse-xml" "" \
+ [list S3 usage -parse-xml $args]
+ }
+ if {![dict exists $myargs -parse-xml]} {
+ # We need to fetch the results.
+ if {"" eq [dict get $myargs -bucket]} {
+ error "GetAcl requires -bucket" "" [list S3 usage -bucket $args]
+ }
+ if {![dict exists $myargs -resource]} {
+ error "GetAcl requires -resource" "" [list S3 usage -resource $args]
+ }
+ # Clean up bucket, and take one leading slash (if any) off resource.
+ if {[string match "/*" [dict get $myargs -resource]]} {
+ dict set myargs -resource \
+ [string range [dict get $myargs -resource] 1 end]
+ }
+ set req [dict create verb GET \
+ resource /[dict get $myargs -bucket]/[dict get $myargs -resource] \
+ rtype acl]
+ set dict [S3::maybebackground $req $myargs]
+ if {$restype eq "REST"} {
+ return $dict ; #we're done!
+ }
+ S3::throwhttp $dict ; #make sure it worked.
+ set xml [dict get $dict outbody]
+ } else {
+ set xml [dict get $myargs -parse-xml]
+ }
+ if {[dict get $myargs -result-type] == "xml"} {
+ return $xml
+ }
+ set pxml [xsxp::parse $xml]
+ if {[dict get $myargs -result-type] == "pxml"} {
+ return $pxml
+ }
+ if {[dict get $myargs -result-type] == "dict"} {
+ array set resdict {}
+ set owner [xsxp::fetch $pxml Owner/ID %PCDATA]
+ set grants [xsxp::fetch $pxml AccessControlList %CHILDREN]
+ foreach grant $grants {
+ set perm [xsxp::fetch $grant Permission %PCDATA]
+ set id ""
+ catch {set id [xsxp::fetch $grant Grantee/ID %PCDATA]}
+ if {$id == ""} {
+ set id [xsxp::fetch $grant Grantee/URI %PCDATA]
+ }
+ lappend resdict($perm) $id
+ }
+ return [dict create owner $owner acl [array get resdict]]
+ }
+ error "GetAcl requires -result-type to be REST, xml, pxml or dict" "" [list S3 usage -result-type $args]
+}
+
+# Make one Grant thingie
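+# For instance, [S3::engrant AllUsers READ] yields one <Grant> element
+# whose Grantee is the global AllUsers group URI and whose
+# <Permission> is READ.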
+proc S3::engrant {who what} {
+ if {$who == "AuthenticatedUsers" || $who == "AllUsers"} {
+ set who http://acs.amazonaws.com/groups/global/$who
+ }
+ if {-1 != [string first "//" $who]} {
+ set type Group ; set tag URI
+ } elseif {-1 != [string first "@" $who]} {
+ set type AmazonCustomerByEmail ; set tag EmailAddress
+ } else {
+ set type CanonicalUser ; set tag ID
+ }
+ set who [string map {< &lt; > &gt; & &amp;} $who]
+ set what [string toupper $what]
+ set xml "<Grant><Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"$type\"><$tag>$who</$tag></Grantee>"
+ append xml "<Permission>$what</Permission></Grant>"
+ return $xml
+}
+
+# Make the owner header
+proc S3::enowner {owner} {
+ return "<AccessControlPolicy xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"><Owner><ID>$owner</ID></Owner><AccessControlList>"
+ return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<AccessControlPolicy xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"><Owner><ID>$owner</ID></Owner><AccessControlList>"
+}
+
+proc S3::endacl {} {
+ return "</AccessControlList></AccessControlPolicy>\n"
+}
+
+# Public. Set the ACL on an existing object.
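+# A minimal usage sketch (names are hypothetical):
+#   S3::PutAcl -bucket mybucket -resource docs/remote.txt -acl public-read
+# -acl also accepts raw XML (anything starting with "<") or an
+# owner/acl dict of the form returned by S3::GetAcl -result-type dict.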
+proc S3::PutAcl {args} {
+ global errorCode
+ checkinit
+ set myargs [S3::parseargs1 $args {
+ -bucket -blocking -resource -acl -owner
+ }]
+ if {![dict exists $myargs -bucket]} {
+ dict set myargs -bucket [S3::Configure -default-bucket]
+ }
+ dict set myargs -bucket [string trim [dict get $myargs -bucket] "/ "]
+ if {"" eq [dict get $myargs -bucket]} {
+ error "PutAcl requires -bucket" "" [list S3 usage -bucket $args]
+ }
+ if {![dict exists $myargs -resource]} {
+ error "PutAcl requires -resource" "" [list S3 usage -resource $args]
+ }
+ if {![dict exists $myargs -acl]} {
+ dict set myargs -acl [S3::Configure -default-acl]
+ }
+ dict set myargs -acl [string trim [dict get $myargs -acl]]
+ if {[dict get $myargs -acl] == ""} {
+ dict set myargs -acl [S3::Configure -default-acl]
+ }
+ if {[dict get $myargs -acl] == ""} {
+ error "PutAcl requires -acl" "" [list D3 usage -resource $args]
+ }
+ # Clean up bucket, and take one leading slash (if any) off resource.
+ if {[string match "/*" [dict get $myargs -resource]]} {
+ dict set myargs -resource \
+ [string range [dict get $myargs -resource] 1 end]
+ }
+ # Now, figure out the XML to send.
+ set acl [dict get $myargs -acl]
+ set owner ""
+ if {"<" != [string index $acl 0] && ![dict exists $myargs -owner]} {
+ # Grab the owner off the resource
+ set req [dict create verb GET \
+ resource /[dict get $myargs -bucket]/[dict get $myargs -resource] \
+ rtype acl]
+ set dict [S3::maybebackground $req $myargs]
+ S3::throwhttp $dict ; #make sure it worked.
+ set xml [dict get $dict outbody]
+ set pxml [xsxp::parse $xml]
+ set owner [xsxp::fetch $pxml Owner/ID %PCDATA]
+ }
+ if {[dict exists $myargs -owner]} {
+ set owner [dict get $myargs -owner]
+ }
+ set xml [enowner $owner]
+ if {"" == $acl || "private" == $acl} {
+ append xml [engrant $owner FULL_CONTROL]
+ append xml [endacl]
+ } elseif {"public-read" == $acl} {
+ append xml [engrant $owner FULL_CONTROL]
+ append xml [engrant AllUsers READ]
+ append xml [endacl]
+ } elseif {"public-read-write" == $acl} {
+ append xml [engrant $owner FULL_CONTROL]
+ append xml [engrant AllUsers READ]
+ append xml [engrant AllUsers WRITE]
+ append xml [endacl]
+ } elseif {"authenticated-read" == $acl} {
+ append xml [engrant $owner FULL_CONTROL]
+ append xml [engrant AuthenticatedUsers READ]
+ append xml [endacl]
+ } elseif {"<" == [string index $acl 0]} {
+ set xml $acl
+ } elseif {[llength $acl] % 2 != 0} {
+ error "S3::PutAcl -acl must be xml, private, public-read, public-read-write, authenticated-read, or a dictionary" \
+ "" [list S3 usage -acl $acl]
+ } else {
+ # ACL in permission/ID-list format.
+ if {[dict exists $acl owner] && [dict exists $acl acl]} {
+ set xml [S3::enowner [dict get $acl owner]]
+ set acl [dict get $acl acl]
+ }
+ foreach perm {FULL_CONTROL READ READ_ACP WRITE WRITE_ACP} {
+ if {[dict exists $acl $perm]} {
+ foreach id [dict get $acl $perm] {
+ append xml [engrant $id $perm]
+ }
+ }
+ }
+ append xml [endacl]
+ }
+ set req [dict create verb PUT \
+ resource /[dict get $myargs -bucket]/[dict get $myargs -resource] \
+ inbody $xml \
+ rtype acl]
+ set res [S3::maybebackground $req $myargs]
+ S3::throwhttp $res ; #make sure it worked.
+ return $xml
+}
+
+# Public. Delete a resource from a bucket.
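+# A minimal usage sketch (names are hypothetical):
+#   S3::Delete -bucket mybucket -resource docs/remote.txt -status s
+# Returns 1 for a 2xx HTTP status; a successful delete normally sets
+# s to something like {204 {No Content}}.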
+proc S3::Delete {args} {
+ global errorCode
+ checkinit
+ set myargs [S3::parseargs1 $args {
+ -bucket -blocking -resource -status
+ }]
+ if {![dict exists $myargs -bucket]} {
+ dict set myargs -bucket [S3::Configure -default-bucket]
+ }
+ dict set myargs -bucket [string trim [dict get $myargs -bucket] "/ "]
+ if {"" eq [dict get $myargs -bucket]} {
+ error "Delete requires -bucket" "" [list S3 usage -bucket $args]
+ }
+ if {![dict exists $myargs -resource]} {
+ error "Delete requires -resource" "" [list S3 usage -resource $args]
+ }
+ # Clean up bucket, and take one leading slash (if any) off resource.
+ if {[string match "/*" [dict get $myargs -resource]]} {
+ dict set myargs -resource \
+ [string range [dict get $myargs -resource] 1 end]
+ }
+ set req [dict create verb DELETE \
+ resource /[dict get $myargs -bucket]/[dict get $myargs -resource]]
+ set res [S3::maybebackground $req $myargs]
+ if {[dict exists $myargs -status]} {
+ set x [list [dict get $res httpstatus] [dict get $res httpmessage]]
+ uplevel 1 \
+ [list set [dict get $myargs -status] $x]
+ }
+ return [string match "2??" [dict get $res httpstatus]]
+}
+
+# Some helper routines for Push, Pull, and Toss
+
+# Internal. Filter for fileutil::find.
+proc S3::findfilter {dirs name} {
+ # In particular, skip links, devices, etc.
+ if {$dirs} {
+ return [expr {[file isdirectory $name] || [file isfile $name]}]
+ } else {
+ return [file isfile $name]
+ }
+}
+
+# Internal. Get list of local files, appropriately trimmed.
+proc S3::getLocal {root dirs} {
+ # Thanks to Michael Cleverly for this first line...
+ set base [file normalize [file join [pwd] $root]]
+ if {![string match "*/" $base]} {
+ set base $base/
+ }
+ set files {} ; set bl [string length $base]
+ foreach file [fileutil::find $base [list S3::findfilter $dirs]] {
+ if {[file isdirectory $file]} {
+ lappend files [string range $file $bl end]/
+ } else {
+ lappend files [string range $file $bl end]
+ }
+ }
+ set files [lsort $files]
+ # At this point, $files is a sorted list of all the local files,
+ # with a trailing / on any directories included in the list.
+ return $files
+}
+
+# Internal. Get list of remote resources, appropriately trimmed.
+proc S3::getRemote {bucket prefix blocking} {
+ set prefix [string trim $prefix " /"]
+ if {0 != [string length $prefix]} {append prefix /}
+ set res [S3::GetBucket -bucket $bucket -prefix $prefix \
+ -result-type names -blocking $blocking]
+ set names {} ; set pl [string length $prefix]
+ foreach name $res {
+ lappend names [string range $name $pl end]
+ }
+ return [lsort $names]
+}
+
+# Internal. Create any directories we need to put the file in place.
+proc S3::makeDirs {directory suffix} {
+ set sofar {}
+ set nodes [split $suffix /]
+ set nodes [lrange $nodes 0 end-1]
+ foreach node $nodes {
+ lappend sofar $node
+ set tocheck [file join $directory {*}$sofar]
+ if {![file exists $tocheck]} {
+ catch {file mkdir $tocheck}
+ }
+ }
+}
+
+# Internal. Default progress monitor for push, pull, toss.
+proc S3::ignore {args} {} ; # default progress monitor
+
+# Internal. For development and testing. Progress monitor.
+proc S3::printargs {args} {puts $args} ; # For testing.
+
+# Public. Send a local directory tree to S3.
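+# A minimal usage sketch (names are hypothetical):
+#   S3::Push -bucket mybucket -directory /tmp/site -prefix website \
+#       -compare always -delete 1 -progress ::S3::printargs
+# The result is a dict keyed by resource suffix, plus an entry under
+# the empty-string key holding the filescopied/bytescopied/... totals.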
+proc S3::Push {args} {
+ uplevel #0 package require fileutil
+ global errorCode errorInfo
+ checkinit
+ set myargs [S3::parseargs1 $args {
+ -bucket -blocking -prefix -directory
+ -compare -x-amz-meta-* -acl -delete -error -progress
+ }]
+ if {![dict exists $myargs -bucket]} {
+ dict set myargs -bucket [S3::Configure -default-bucket]
+ }
+ dict set myargs -bucket [string trim [dict get $myargs -bucket] "/ "]
+ if {"" eq [dict get $myargs -bucket]} {
+ error "Push requires -bucket" "" [list S3 usage -bucket $args]
+ }
+ if {![dict exists $myargs -directory]} {
+ error "Push requires -directory" "" [list S3 usage -directory $args]
+ }
+ # Set default values.
+ set defaults "
+ -acl \"[S3::Configure -default-acl]\"
+ -compare [S3::Configure -default-compare]
+ -prefix {} -delete 0 -error continue -progress ::S3::ignore -blocking 1"
+ foreach {key val} $defaults {
+ if {![dict exists $myargs $key]} {dict set myargs $key $val}
+ }
+ # Pull out arguments for convenience
+ foreach i {progress prefix directory bucket blocking} {
+ set $i [dict get $myargs -$i]
+ }
+ set prefix [string trimright $prefix /]
+ set meta [dict filter $myargs key x-amz-meta-*]
+    # We're ready to roll here.
+ uplevel 1 [list {*}$progress args $myargs]
+ if {[catch {
+ set local [S3::getLocal $directory 0]
+ } caught]} {
+ error "Push could not walk local directory - $caught" \
+ $errorInfo $errorCode
+ }
+ uplevel 1 [list {*}$progress local $local]
+ if {[catch {
+ set remote [S3::getRemote $bucket $prefix $blocking]
+ } caught]} {
+ error "Push could not walk remote directory - $caught" \
+ $errorInfo $errorCode
+ }
+ uplevel 1 [list {*}$progress remote $remote]
+ set result [dict create]
+ set result0 [dict create \
+ filescopied 0 bytescopied 0 compareskipped 0 \
+ errorskipped 0 filesdeleted 0 filesnotdeleted 0]
+ foreach suffix $local {
+ uplevel 1 [list {*}$progress copy $suffix start]
+ set err [catch {
+ S3::Put -bucket $bucket -blocking $blocking \
+ -file [file join $directory $suffix] \
+ -resource $prefix/$suffix \
+ -acl [dict get $myargs -acl] \
+ {*}$meta \
+ -compare [dict get $myargs -compare]} caught]
+ if {$err} {
+ uplevel 1 [list {*}$progress copy $suffix $errorCode]
+ dict incr result0 errorskipped
+ dict set result $suffix $errorCode
+ if {[dict get $myargs -error] eq "throw"} {
+ error "Push failed to Put - $caught" $errorInfo $errorCode
+ } elseif {[dict get $myargs -error] eq "break"} {
+ break
+ }
+ } else {
+ if {$caught} {
+ uplevel 1 [list {*}$progress copy $suffix copied]
+ dict incr result0 filescopied
+ dict incr result0 bytescopied \
+ [file size [file join $directory $suffix]]
+ dict set result $suffix copied
+ } else {
+ uplevel 1 [list {*}$progress copy $suffix skipped]
+ dict incr result0 compareskipped
+ dict set result $suffix skipped
+ }
+ }
+ }
+ # Now do deletes, if so desired
+ if {[dict get $myargs -delete]} {
+ foreach suffix $remote {
+ if {$suffix ni $local} {
+ set err [catch {
+ S3::Delete -bucket $bucket -blocking $blocking \
+ -resource $prefix/$suffix } caught]
+ if {$err} {
+ uplevel 1 [list {*}$progress delete $suffix $errorCode]
+ dict incr result0 filesnotdeleted
+ dict set result $suffix notdeleted
+ } else {
+ uplevel 1 [list {*}$progress delete $suffix {}]
+ dict incr result0 filesdeleted
+ dict set result $suffix deleted
+ }
+ }
+ }
+ }
+ dict set result {} $result0
+ uplevel 1 [list {*}$progress finished $result]
+ return $result
+}
+
+# Public. Fetch a portion of a remote bucket into a local directory tree.
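+# A minimal usage sketch (names are hypothetical):
+#   S3::Pull -bucket mybucket -prefix website -directory /tmp/mirror \
+#       -timestamp aws -delete 1
+# The result dict has the same shape as the one returned by S3::Push.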
+proc S3::Pull {args} {
+    # This is waaaay too similar to Push for comfort.
+ # Fold it up later.
+ uplevel #0 package require fileutil
+ global errorCode errorInfo
+ checkinit
+ set myargs [S3::parseargs1 $args {
+ -bucket -blocking -prefix -directory
+ -compare -timestamp -delete -error -progress
+ }]
+ if {![dict exists $myargs -bucket]} {
+ dict set myargs -bucket [S3::Configure -default-bucket]
+ }
+ dict set myargs -bucket [string trim [dict get $myargs -bucket] "/ "]
+ if {"" eq [dict get $myargs -bucket]} {
+ error "Pull requires -bucket" "" [list S3 usage -bucket $args]
+ }
+ if {![dict exists $myargs -directory]} {
+ error "Pull requires -directory" "" [list S3 usage -directory $args]
+ }
+ # Set default values.
+ set defaults "
+ -timestamp now
+ -compare [S3::Configure -default-compare]
+ -prefix {} -delete 0 -error continue -progress ::S3::ignore -blocking 1"
+ foreach {key val} $defaults {
+ if {![dict exists $myargs $key]} {dict set myargs $key $val}
+ }
+ # Pull out arguments for convenience
+ foreach i {progress prefix directory bucket blocking} {
+ set $i [dict get $myargs -$i]
+ }
+ set prefix [string trimright $prefix /]
+    # We're ready to roll here.
+ uplevel 1 [list {*}$progress args $myargs]
+ if {[catch {
+ set local [S3::getLocal $directory 1]
+ } caught]} {
+ error "Pull could not walk local directory - $caught" \
+ $errorInfo $errorCode
+ }
+ uplevel 1 [list {*}$progress local $local]
+ if {[catch {
+ set remote [S3::getRemote $bucket $prefix $blocking]
+ } caught]} {
+ error "Pull could not walk remote directory - $caught" \
+ $errorInfo $errorCode
+ }
+ uplevel 1 [list {*}$progress remote $remote]
+ set result [dict create]
+ set result0 [dict create \
+ filescopied 0 bytescopied 0 compareskipped 0 \
+ errorskipped 0 filesdeleted 0 filesnotdeleted 0]
+ foreach suffix $remote {
+ uplevel 1 [list {*}$progress copy $suffix start]
+ set err [catch {
+ S3::makeDirs $directory $suffix
+ S3::Get -bucket $bucket -blocking $blocking \
+ -file [file join $directory $suffix] \
+ -resource $prefix/$suffix \
+ -timestamp [dict get $myargs -timestamp] \
+ -compare [dict get $myargs -compare]} caught]
+ if {$err} {
+ uplevel 1 [list {*}$progress copy $suffix $errorCode]
+ dict incr result0 errorskipped
+ dict set result $suffix $errorCode
+ if {[dict get $myargs -error] eq "throw"} {
+ error "Pull failed to Get - $caught" $errorInfo $errorCode
+ } elseif {[dict get $myargs -error] eq "break"} {
+ break
+ }
+ } else {
+ if {$caught} {
+ uplevel 1 [list {*}$progress copy $suffix copied]
+ dict incr result0 filescopied
+ dict incr result0 bytescopied \
+ [file size [file join $directory $suffix]]
+ dict set result $suffix copied
+ } else {
+ uplevel 1 [list {*}$progress copy $suffix skipped]
+ dict incr result0 compareskipped
+ dict set result $suffix skipped
+ }
+ }
+ }
+ # Now do deletes, if so desired
+ if {[dict get $myargs -delete]} {
+ foreach suffix [lsort -decreasing $local] {
+ # Note, decreasing because we delete empty dirs
+ if {[string match "*/" $suffix]} {
+ set f [file join $directory $suffix]
+ catch {file delete -- $f}
+ if {![file exists $f]} {
+ uplevel 1 [list {*}$progress delete $suffix {}]
+ dict set result $suffix deleted
+ dict incr result0 filesdeleted
+ }
+ } elseif {$suffix ni $remote} {
+ set err [catch {
+ file delete [file join $directory $suffix]
+ } caught]
+ if {$err} {
+ uplevel 1 [list {*}$progress delete $suffix $errorCode]
+ dict incr result0 filesnotdeleted
+ dict set result $suffix notdeleted
+ } else {
+ uplevel 1 [list {*}$progress delete $suffix {}]
+ dict incr result0 filesdeleted
+ dict set result $suffix deleted
+ }
+ }
+ }
+ }
+ dict set result {} $result0
+ uplevel 1 [list {*}$progress finished $result]
+ return $result
+}
+
+# Public. Delete a collection of resources with the same prefix.
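+# A minimal usage sketch (names are hypothetical):
+#   S3::Toss -bucket mybucket -prefix website
+# Deletes every resource under the prefix; the result dict has the
+# same shape as the one returned by S3::Push and S3::Pull.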
+proc S3::Toss {args} {
+    # This is waaaay too similar to Push for comfort.
+ # Fold it up later.
+ global errorCode errorInfo
+ checkinit
+ set myargs [S3::parseargs1 $args {
+ -bucket -blocking -prefix
+ -error -progress
+ }]
+ if {![dict exists $myargs -bucket]} {
+ dict set myargs -bucket [S3::Configure -default-bucket]
+ }
+ dict set myargs -bucket [string trim [dict get $myargs -bucket] "/ "]
+ if {"" eq [dict get $myargs -bucket]} {
+ error "Toss requires -bucket" "" [list S3 usage -bucket $args]
+ }
+    if {![dict exists $myargs -prefix]} {
+        error "Toss requires -prefix" "" [list S3 usage -prefix $args]
+    }
+ # Set default values.
+ set defaults "-error continue -progress ::S3::ignore -blocking 1"
+ foreach {key val} $defaults {
+ if {![dict exists $myargs $key]} {dict set myargs $key $val}
+ }
+ # Pull out arguments for convenience
+ foreach i {progress prefix bucket blocking} {
+ set $i [dict get $myargs -$i]
+ }
+ set prefix [string trimright $prefix /]
+    # We're ready to roll here.
+ uplevel 1 [list {*}$progress args $myargs]
+ if {[catch {
+ set remote [S3::getRemote $bucket $prefix $blocking]
+ } caught]} {
+ error "Toss could not walk remote bucket - $caught" \
+ $errorInfo $errorCode
+ }
+ uplevel 1 [list {*}$progress remote $remote]
+ set result [dict create]
+ set result0 [dict create \
+ filescopied 0 bytescopied 0 compareskipped 0 \
+ errorskipped 0 filesdeleted 0 filesnotdeleted 0]
+ # Now do deletes
+ foreach suffix $remote {
+ set err [catch {
+ S3::Delete -bucket $bucket -blocking $blocking \
+ -resource $prefix/$suffix } caught]
+ if {$err} {
+ uplevel 1 [list {*}$progress delete $suffix $errorCode]
+ dict incr result0 filesnotdeleted
+ dict set result $suffix notdeleted
+ } else {
+ uplevel 1 [list {*}$progress delete $suffix {}]
+ dict incr result0 filesdeleted
+ dict set result $suffix deleted
+ }
+ }
+ dict set result {} $result0
+ uplevel 1 [list {*}$progress finished $result]
+ return $result
+}
diff --git a/tcllib/modules/amazon-s3/S3.test b/tcllib/modules/amazon-s3/S3.test
new file mode 100644
index 0000000..b79227a
--- /dev/null
+++ b/tcllib/modules/amazon-s3/S3.test
@@ -0,0 +1,1766 @@
+# -*- tcl -*-
+# S3.test: tests for the S3 access package.
+
+# This file contains a collection of tests for the S3
+# package. Sourcing this file into Tcl runs the tests and generates
+# output for errors. No output means no errors were found.
+
+# Copyright (c) 2006,2008 Darren New. All Rights Reserved.
+# Copyright (c) 2008 Andreas Kupries <andreas_kupries@users.sourceforge.net>
+# (Boilerplate stuff (header, footer))
+# All rights reserved.
+#
+# RCS: @(#) $Id: S3.test,v 1.3 2008/09/04 02:11:12 andreas_kupries Exp $
+
+# -------------------------------------------------------------------------
+
+source [file join \
+ [file dirname [file dirname [file join [pwd] [info script]]]] \
+ devtools testutilities.tcl]
+
+testsNeedTcl 8.5
+testsNeedTcltest 2.0
+
+if {[catch {package require xml}]} {
+ puts " Aborting the tests found in \"[file tail [info script]]\""
+ puts " Requiring xml package, not found."
+ return
+}
+
+support {
+ # Requires xml (TclXML)
+ useLocal xsxp.tcl xsxp
+}
+testing {
+ useLocal S3.tcl S3
+}
+
+# -------------------------------------------------------------------------
+
+# I normally leave BucketDeletion false, because Amazon gets cranky
+# if you delete a bucket and then try to recreate it any time soon.
+
+# This may clobber files starting with the characters "S3T". Don't
+# run it in a directory containing such files that you want to keep.
+
+# Put your own keys in test-S3.config.
+
+tcltest::customMatch S3err S3ErrorMatch
+
+tcltest::testConstraint BucketDeletion false
+tcltest::testConstraint REST true
+tcltest::testConstraint BucketIO true
+tcltest::testConstraint ItemIO true
+tcltest::testConstraint Put true
+tcltest::testConstraint Get true
+tcltest::testConstraint Acl true
+tcltest::testConstraint Head true
+tcltest::testConstraint Directory true
+tcltest::testConstraint Delete true
+
+tcltest::configure -verbose {body error pass skip start}
+tcltest::configure -debug 1
+
+# Allow easy testing of S3-style errorCode returns.
+
+proc S3expectErr {code} {
+ global errorCode
+ set errorCode {}
+ set x [catch $code result]
+ return [concat $x $errorCode]
+}
+
+proc S3ErrorMatch {expected actual} {
+ if {$expected eq [lrange $actual 0 [expr {[llength $expected]-1}]]} {
+ return true
+ } else {
+ return false
+ }
+}
+
+# Allow easy testing of background tasks.
+
+proc S3expectBackgroundREST {req} {
+ # Might be done better, tho...
+ set ::S3::afterResult {}
+ set ::S3::afterRan 0
+ set y [after 1 {set ::S3::afterRan 1}]
+ S3::REST $req
+ vwait [dict get $req resultvar]
+ set x [set [dict get $req resultvar]]
+ after cancel $y
+ #if {$::S3::afterResult eq "AFTER-FAILURE"} {
+ #error "Background task never returned value" "" [after info $x]
+ #}
+ if {[string match "BGERROR*" $::S3::afterResult]} {
+ error "BGError triggered: $::S3::afterResult" "" $::S3::afterResult
+ }
+ if {0 == $::S3::afterRan} {
+ error "Concurrent events did not run" "" "S3 test afterRan"
+ }
+ return $x
+}
+
+proc S3expectBackground {code} {
+ # Might be done better, tho...
+ set ::S3::afterResult {}
+ set ::S3::afterRan 0
+ set y [after 1 {set ::S3::afterRan 1}]
+ set x [eval $code]
+ after cancel $y
+ #if {$::S3::afterResult eq "AFTER-FAILURE"} {
+ #error "Background task never returned value" "" [after info $x]
+ #}
+ if {[string match "BGERROR*" $::S3::afterResult]} {
+ error "BGError triggered: $::S3::afterResult" "" $::S3::afterResult
+ }
+ if {0 == $::S3::afterRan} {
+ error "Concurrent events did not run" "" "S3 test afterRan"
+ }
+ return $x
+}
+
+proc bgerror {args} {set ::S3::afterResult [list "BGERROR" $args $::errorInfo]}
+
+# Allow easy incorporation of user's AccessID and SecretKey
+
+proc S3loadKeys {} {
+ source test-S3.config
+}
+
+namespace import ::tcltest::test
+
+proc CleanUpBuckets {{buckets 0}} {
+ S3loadKeys
+ set bucket [S3::SuggestBucket TclTestS3b]
+ for {set i 0} {$i < 25} {incr i} {
+ puts "Deleting $i of 25"
+ for {set j 0} {$j < 10} {incr j} {
+ set q [format %02d $i]
+ set d [S3::REST \
+ [dict create verb DELETE resource /$bucket/thing/$q/$j]]
+ S3::throwhttp $d
+ }
+ }
+ S3::REST [dict create verb DELETE resource /$bucket/fred ]
+ S3::REST [dict create verb DELETE resource /$bucket/barney ]
+ S3::REST [dict create verb DELETE resource /$bucket/wilma ]
+ S3::REST [dict create verb DELETE resource /$bucket/betty ]
+ S3::REST [dict create verb DELETE resource /$bucket/cartman ]
+ S3::REST [dict create verb DELETE resource /$bucket/cartoon/tweety ]
+ S3::REST [dict create verb DELETE resource /$bucket/cartoon/sylvester ]
+ S3::REST [dict create verb DELETE resource "/$bucket/cartoon/road runner" ]
+ S3::REST [dict create verb DELETE \
+ resource "/$bucket/cartoon/wile e. coyote" ]
+ if {$buckets} {S3::REST [dict create verb DELETE resource /$bucket]}
+}
+
+# CleanUpBuckets 0 ; exit
+
+# Test URL encoding
+
+test S3-1.10 {URL encoding no parameters} -body {
+ S3::to_url /quotes/nelson {}
+} -result {/quotes/nelson}
+
+test S3-1.20 {URL encoding with parameters} -body {
+ S3::to_url /quotes/nelson {alpha one beta two}
+} -result {/quotes/nelson?alpha=one&beta=two}
+
+test S3-1.30 {URL encoding with parameters and query} -body {
+ S3::to_url /quotes/nelson?acl {alpha one beta two}
+} -result {/quotes/nelson?acl&alpha=one&beta=two}
+
+test S3-1.40 {URL with non-ASCII characters} -body {
+ set funky "/xyzzy/zz+fun\(\)good?junk space"
+ append funky "&and_utf-8\u2211Sigma\u5927Da"
+ S3::encode_url $funky
+} -result {/xyzzy/zz%2bfun%28%29good%3fjunk%20space%26and_utf-8%e2%88%91Sigma%e5%a4%a7Da}
+
+test S3-1.50 {Check out content types A} -setup {
+ tcltest::makeFile "This is just text" "S3junk.txt"
+} -body {
+ S3::contenttype S3junk.txt
+} -cleanup {
+ tcltest::removeFile "S3junk.txt"
+} -result "text/plain"
+
+test S3-1.60 {Check out content types B} -body {
+ # May be unhappy under UNIX?
+ S3::contenttype origT1.jpg
+} -result "image/jpeg"
+
+test S3-2.10 {Config no args} -body {
+ array set x [S3::Configure]
+ foreach key [lsort [array names x]] {
+ puts $key ; puts $x($key)
+ }
+} -cleanup {unset x} -output "-accesskeyid\n\n-bucket-prefix\nTclS3\n-default-acl\n\n-default-bucket\n\n-default-compare\nalways\n-default-separator\n/\n-reset\nfalse\n-retries\n3\n-secretaccesskey\n\n-service-access-point\ns3.amazonaws.com\n-slop-seconds\n3\n-use-tls\nfalse\n"
+
+test S3-2.20 {Config, one arg} -body {
+ S3::Configure -bucket-prefix
+} -result {TclS3}
+
+test S3-2.30 {Config, set bucket prefix} -body {
+ S3::Configure -bucket-prefix TclTestS3
+ S3::Configure -bucket-prefix
+} -result {TclTestS3}
+
+test S3-2.40 {Config, bad first argument} -body {
+ S3expectErr {S3::Configure -xyzzy}
+} -result "1 S3 usage -xyzzy" -match S3err
+
+test S3-2.50 {Config, wrong number of pairs} -body {
+ set ::errorCode {}
+ S3::Configure -bucket-prefix TclTestS3
+ set x [catch {S3::Configure -bucket-prefix 1234 -use-tls}]
+ set y [S3::Configure -bucket-prefix]
+ return [concat $x [lrange $::errorCode 0 1] $y]
+} -result {1 S3 usage TclTestS3} -cleanup {unset x ; unset y}
+
+test S3-2.60 {Config, test reset} -body {
+ S3::Configure -bucket-prefix XYZZY -reset true
+ return [S3::Configure -bucket-prefix]
+} -result TclS3
+
+test S3-2.70 {Suggest bucket name} -body {
+ S3::Configure -accesskeyid 44CF9590006BF252F707 \
+ -secretaccesskey OtxrzxIsfpFjA7SwPzILwy8Bw21TLhquhboDYROV
+ set x [S3::SuggestBucket Bloop]
+ return [concat [string match *Bloop* $x] \
+ [string match *44CF9590006BF252F707* $x] \
+ [string match *OtxrzxIsfpFjA7SwPzILwy8Bw21TLhquhboDYROV* $x]]
+} -result {1 1 0}
+
+# Now test the stuff from the manual
+
+test S3-3.10 {First documentation example of AUTH} -body {
+ S3::Configure -accesskeyid 44CF9590006BF252F707 \
+ -secretaccesskey OtxrzxIsfpFjA7SwPzILwy8Bw21TLhquhboDYROV
+ set verb put
+ set resource /quotes/nelson
+ set content-type text/html
+ set headers {
+ date "Thu, 17 Nov 2005 18:49:58 GMT"
+ content-md5 c8fdb181845a4ca6b8fec737b3581d76
+ x-amz-meta-author foo@bar.com
+ x-amz-magic abracadabra
+ }
+ set res [S3::authREST $verb $resource ${content-type} $headers]
+ dict get $res authorization
+} -result {AWS 44CF9590006BF252F707:jZNOcbfWmD/A/f3hSvVzXZjM2HU=}
+
+test S3-3.20 {Second documentation example of AUTH} -body {
+ S3::Configure -accesskeyid 44CF9590006BF252F707 \
+ -secretaccesskey OtxrzxIsfpFjA7SwPzILwy8Bw21TLhquhboDYROV
+ set verb GET
+ set resource /quotes/nelson
+ set headers {
+ date XXXXXXX
+ x-amz-magic abracadabra
+ x-amz-date "Thu, 17 Nov 2005 18:49:58 GMT"
+ }
+ set res [S3::authREST $verb $resource "" $headers]
+ dict get $res authorization
+} -result {AWS 44CF9590006BF252F707:5m+HAmc5JsrgyDelh9+a2dNrzN8=}
+
+test S3-4.10 {REST Blocking list of buckets} -constraints "BucketIO REST" \
+ -setup S3loadKeys -body {
+ set req [dict create verb GET resource /]
+ set res [S3::REST $req]
+ return [list [lsort [dict keys $res]] [dict get $res httpstatus] \
+ [expr {0<[string length [dict get $res outbody]]}]]
+} -result {{httpmessage httpstatus outbody outheaders resource verb} 200 1}
+
+test S3-4.20 {REST Nonblocking list of buckets} -constraints "BucketIO REST" \
+ -setup S3loadKeys -body {
+ set req [dict create verb GET resource / resultvar ::S3RES]
+ set res [S3expectBackgroundREST $req]
+ return [list [lsort [dict keys $res]] [dict get $res httpstatus] \
+ [expr {0<[string length [dict get $res outbody]]}]]
+} -result {{httpmessage httpstatus outbody outheaders resource resultvar verb} 200 1}
+
+test S3-4.30 {REST blocking create bucket} -constraints "BucketIO REST" \
+ -setup S3loadKeys -body {
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb PUT resource /$b headers {x-amz-acl public-read}]
+ set res [S3::REST $req]
+ return [dict get $res httpstatus]
+} -result 200
+
+test S3-4.40 {REST get bucket acl} -constraints "BucketIO REST" \
+ -setup S3loadKeys -body {
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb GET resource /$b rtype acl]
+ set res [S3::REST $req]
+ set lookfor {<URI>http://acs.amazonaws.com/groups/global/AllUsers</URI></Grantee><Permission>READ</Permission>}
+ set found [expr {-1 != [string first $lookfor $res]}]
+ return [list $found [dict get $res httpstatus]]
+} -result "1 200"
+
+test S3-4.50 {REST blocking put,get,compare contents} \
+ -constraints "ItemIO REST" \
+ -setup S3loadKeys -body {
+ set body "This is a test. This is only a test.\nHad this been a real emergency, you would be dead.\n"
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb PUT resource /$b/t1.txt inbody $body \
+ headers {x-amz-acl public-read}]
+ set res [S3::REST $req]
+ set r1 [dict get $res httpstatus]
+ set req [dict create verb GET resource /$b/t1.txt rtype acl]
+ set res [S3::REST $req]
+ set lookfor {<URI>http://acs.amazonaws.com/groups/global/AllUsers</URI></Grantee><Permission>READ</Permission>}
+ set r2 [expr {-1 != [string first $lookfor $res]}]
+ set req [dict create verb GET resource /$b/t1.txt]
+ set res [S3::REST $req]
+ set r3 [string compare $body [dict get $res outbody]]
+ return [list $r1 $r2 $r3]
+} -result "200 1 0"
+
+test S3-4.60 {REST nonblocking put,get,compare contents} \
+ -constraints "ItemIO REST" \
+ -setup S3loadKeys -body {
+ set body "This is a test. This is only a test.\nHad this been a real emergency, you would be dead.\n"
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb PUT resource /$b/t1.txt inbody $body \
+ headers {x-amz-acl public-read} resultvar ::S3REST]
+ set res [S3expectBackgroundREST $req]
+ set r1 [dict get $res httpstatus]
+ set req [dict create verb GET resource /$b/t1.txt rtype acl resultvar ::S3REST]
+ set res [S3expectBackgroundREST $req]
+ set lookfor {<URI>http://acs.amazonaws.com/groups/global/AllUsers</URI></Grantee><Permission>READ</Permission>}
+ set r2 [expr {-1 != [string first $lookfor $res]}]
+ set req [dict create verb GET resource /$b/t1.txt resultvar ::S3REST]
+ set res [S3expectBackgroundREST $req]
+ set r3 [string compare $body [dict get $res outbody]]
+ return [list $r1 $r2 $r3]
+} -result "200 1 0"
+
+test S3-4.70 {REST blocking put,delete} \
+ -constraints "ItemIO REST" \
+ -setup S3loadKeys -body {
+ set body "This is a test. This is only a test.\nHad this been a real emergency, you would be dead.\n"
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb PUT resource /$b/t1.txt inbody $body]
+ set res [S3::REST $req]
+ set r1 [dict get $res httpstatus]
+ set req [dict create verb DELETE resource /$b/t1.txt]
+ set res [S3::REST $req]
+ set r2 [dict get $res httpstatus]
+ return [list $r1 $r2]
+} -result "200 204" ; # Delete returns "no content"
+
+test S3-4.80 {REST nonblocking put,delete} \
+ -constraints "ItemIO REST" \
+ -setup S3loadKeys -body {
+ set body "This is a test. This is only a test.\nHad this been a real emergency, you would be dead.\n"
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb PUT resource /$b/t1.txt inbody $body \
+ resultvar ::S3RES]
+ set res [S3expectBackgroundREST $req]
+ set r1 [dict get $res httpstatus]
+ set req [dict create verb DELETE resource /$b/t1.txt resultvar ::S3RES]
+ set res [S3expectBackgroundREST $req]
+ set r2 [dict get $res httpstatus]
+ return [list $r1 $r2]
+} -result "200 204" ; # Delete returns "no content"
+
+test S3-4.90 {REST blocking put,head,delete} \
+ -constraints "ItemIO REST" \
+ -setup S3loadKeys -body {
+ set body "This is a test. This is only a test.\nHad this been a real emergency, you would be dead.\n"
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb PUT resource /$b/t1.txt inbody $body]
+ set res [S3::REST $req]
+ set r1 [dict get $res httpstatus]
+ set req [dict create verb HEAD resource /$b/t1.txt]
+ set res [S3::REST $req]
+ set r2 [dict get $res httpstatus]
+ set r3 [string length [dict get $res outbody]]
+ set req [dict create verb DELETE resource /$b/t1.txt]
+ set res [S3::REST $req]
+ set r4 [dict get $res httpstatus]
+ set req [dict create verb HEAD resource /$b/t1.txt]
+ set res [S3::REST $req]
+ set r5 [dict get $res httpstatus]
+ return [list $r1 $r2 $r3 $r4 $r5]
+} -result "200 200 0 204 404"
+
+test S3-4.100 {REST blocking put,head,delete from big body} \
+ -constraints "ItemIO REST" \
+ -setup S3loadKeys -body {
+ set body "0123456789"
+ set body [string repeat $body 50000] ; # Make body 500,000 bytes.
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb PUT resource /$b/t1.txt inbody $body]
+ set res [S3::REST $req]
+ set r1 [dict get $res httpstatus]
+ set req [dict create verb HEAD resource /$b/t1.txt]
+ set res [S3::REST $req]
+ set r2 [dict get $res httpstatus]
+ set r3 [string length [dict get $res outbody]]
+ set r4 [dict get $res outheaders content-length]
+ set req [dict create verb DELETE resource /$b/t1.txt]
+ set res [S3::REST $req]
+ set r5 [dict get $res httpstatus]
+ set req [dict create verb HEAD resource /$b/t1.txt]
+ set res [S3::REST $req]
+ set r6 [dict get $res httpstatus]
+ return [list $r1 $r2 $r3 $r4 $r5 $r6]
+} -result "200 200 0 500000 204 404"
+
+test S3-4.110 {REST nonblocking put,head,delete from big body} \
+ -constraints "ItemIO REST" \
+ -setup S3loadKeys -body {
+ set body "0123456789"
+ set body [string repeat $body 50000] ; # Make body 500,000 bytes.
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb PUT resource /$b/t2.txt inbody $body resultvar ::S3RES]
+ set res [S3expectBackgroundREST $req]
+ set r1 [dict get $res httpstatus]
+ set req [dict create verb HEAD resource /$b/t2.txt]
+ set res [S3::REST $req]
+ set r2 [dict get $res httpstatus]
+ set r3 [string length [dict get $res outbody]]
+ set r4 [dict get $res outheaders content-length]
+ set req [dict create verb DELETE resource /$b/t2.txt]
+ set res [S3::REST $req]
+ set r5 [dict get $res httpstatus]
+ set req [dict create verb HEAD resource /$b/t2.txt]
+ set res [S3::REST $req]
+ set r6 [dict get $res httpstatus]
+ return [list $r1 $r2 $r3 $r4 $r5 $r6]
+} -result "200 200 0 500000 204 404"
+
+test S3-4.120 {REST nonblocking put,head,delete from big file} \
+ -constraints "ItemIO REST" \
+ -setup S3loadKeys -body {
+ set body "0123456789"
+ set body [string repeat $body 50000] ; # Make body 500,000 bytes.
+ tcltest::makeFile "XXX" S3Tone.txt
+ set x [open S3Tone.txt w] ; puts -nonewline $x $body ; close $x
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb PUT resource /$b/t3.txt infile S3Tone.txt resultvar ::S3RES]
+ set res [S3expectBackgroundREST $req]
+ set r1 [dict get $res httpstatus]
+ set req [dict create verb HEAD resource /$b/t3.txt]
+ set res [S3::REST $req]
+ set r2 [dict get $res httpstatus]
+ set r3 [string length [dict get $res outbody]]
+ set r4 [dict get $res outheaders content-length]
+ set req [dict create verb DELETE resource /$b/t3.txt]
+ set res [S3::REST $req]
+ set r5 [dict get $res httpstatus]
+ set req [dict create verb HEAD resource /$b/t3.txt]
+ set res [S3::REST $req]
+ set r6 [dict get $res httpstatus]
+ tcltest::removeFile S3Tone.txt
+ return [list $r1 $r2 $r3 $r4 $r5 $r6]
+} -result "200 200 0 500000 204 404"
+
+test S3-4.130 {REST blocking put,head,delete from big file} \
+ -constraints "ItemIO REST" \
+ -setup S3loadKeys -body {
+ set body "0123456789"
+ set body [string repeat $body 50000] ; # Make body 500,000 bytes.
+ tcltest::makeFile "XXX" S3Tone.txt
+ set x [open S3Tone.txt w] ; puts -nonewline $x $body ; close $x
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb PUT resource /$b/t3.txt infile S3Tone.txt]
+ set res [S3::REST $req]
+ set r1 [dict get $res httpstatus]
+ set req [dict create verb HEAD resource /$b/t3.txt]
+ set res [S3::REST $req]
+ set r2 [dict get $res httpstatus]
+ set r3 [string length [dict get $res outbody]]
+ set r4 [dict get $res outheaders content-length]
+ set req [dict create verb DELETE resource /$b/t3.txt]
+ set res [S3::REST $req]
+ set r5 [dict get $res httpstatus]
+ set req [dict create verb HEAD resource /$b/t3.txt]
+ set res [S3::REST $req]
+ set r6 [dict get $res httpstatus]
+ tcltest::removeFile S3Tone.txt
+ return [list $r1 $r2 $r3 $r4 $r5 $r6]
+} -result "200 200 0 500000 204 404"
+
+test S3-4.140 {REST nonblocking put,get,delete into file} \
+ -constraints "ItemIO REST" \
+ -setup S3loadKeys -body {
+ set body "0123456789"
+ set body [string repeat $body 50000] ; # Make body 500,000 bytes.
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb PUT resource /$b/t5.txt inbody $body resultvar ::S3RES]
+ set res [S3expectBackgroundREST $req]
+ set r1 [dict get $res httpstatus]
+ tcltest::makeFile "blah" S3Ttwo.txt
+ set x [open S3Ttwo.txt w] ; fconfigure $x -translation binary -encoding binary
+ set req [dict create verb GET resource /$b/t5.txt outchan $x]
+ set res [S3::REST $req]
+ close $x
+ set r2 [dict get $res httpstatus]
+ set r3 [file size S3Ttwo.txt]
+ tcltest::removeFile S3Ttwo.txt
+ set req [dict create verb DELETE resource /$b/t3.txt]
+ set res [S3::REST $req]
+ set r4 [dict get $res httpstatus]
+ set req [dict create verb HEAD resource /$b/t3.txt]
+ set res [S3::REST $req]
+ set r5 [dict get $res httpstatus]
+ return [list $r1 $r2 $r3 $r4 $r5]
+} -result "200 200 500000 204 404"
+
+test S3-4.150 {REST blocking put,get,delete into file} \
+ -constraints "ItemIO REST" \
+ -setup S3loadKeys -body {
+ set body "0123456789"
+ set body [string repeat $body 50000] ; # Make body 500,000 bytes.
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb PUT resource /$b/t5.txt inbody $body]
+ set res [S3::REST $req]
+ set r1 [dict get $res httpstatus]
+ tcltest::makeFile "blah" S3Ttwo.txt
+ set x [open S3Ttwo.txt w] ; fconfigure $x -translation binary -encoding binary
+ set req [dict create verb GET resource /$b/t5.txt outchan $x]
+ set res [S3::REST $req]
+ close $x
+ set r2 [dict get $res httpstatus]
+ set r3 [file size S3Ttwo.txt]
+ tcltest::removeFile S3Ttwo.txt
+ set req [dict create verb DELETE resource /$b/t5.txt]
+ set res [S3::REST $req]
+ set r4 [dict get $res httpstatus]
+ set req [dict create verb HEAD resource /$b/t5.txt]
+ set res [S3::REST $req]
+ set r5 [dict get $res httpstatus]
+ return [list $r1 $r2 $r3 $r4 $r5]
+} -result "200 200 500000 204 404"
+
+test S3-4.160 {REST blocking put,get,delete of file with encoded name} \
+ -constraints "ItemIO REST" \
+ -setup S3loadKeys -body {
+ set body "0123456789"
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set funky "/$b/zz+fun\(\)good?junk space"
+ append funky "&and_utf-8\u2211Sigma\u5927Da"
+ set req [dict create verb PUT resource $funky inbody $body]
+ set res [S3::REST $req]
+ set r1 [dict get $res httpstatus]
+ set req [dict create verb GET resource $funky]
+ set res [S3::REST $req]
+ set r2 [dict get $res httpstatus]
+ set req [dict create verb DELETE resource $funky]
+ set res [S3::REST $req]
+ set r3 [dict get $res httpstatus]
+ set req [dict create verb HEAD resource $funky]
+ set res [S3::REST $req]
+ set r4 [dict get $res httpstatus]
+ return [list $r1 $r2 $r3 $r4]
+} -result "200 200 204 404"
+
+test S3-4.170 {REST delete bucket} \
+ -constraints "BucketDeletion REST" \
+ -setup S3loadKeys -body {
+ # Bucket ought to be empty by now.
+ # Of course, if a delete fails for some reason...
+ set b "TclTestS3.REST.[S3::Configure -accesskeyid]"
+ set req [dict create verb PUT resource /$b headers {x-amz-acl public-read}]
+ set res [S3::REST $req]
+ set r1 [dict get $res httpstatus]
+ after 5000 ; # Give AWS a chance to remember it.
+ set req [dict create verb DELETE resource /$b]
+ set res [S3::REST $req]
+ after 5000 ; # Give AWS a chance to remember it.
+ set r2 [dict get $res httpstatus]
+ set req [dict create verb GET resource /$b]
+ set res [S3::REST $req]
+ set r3 [dict get $res httpstatus]
+ return [list $r1 $r2 $r3]
+} -result "200 204 404"
+
+test S3-10.10 {ListAllMyBuckets auth failure} -constraints BucketIO \
+ -body {
+ S3expectErr {
+ S3::Configure -accesskeyid 44CF9590006BF252F707 \
+ -secretaccesskey OtxrzxIsfpFjA7SwPzILwy8Bw21TLhquhboDYROV
+ S3::ListAllMyBuckets
+ }
+} -result "1 S3 remote 403" -match S3err
+
+test S3-10.20 {ListAllMyBuckets usage params} -body {
+ S3expectErr {
+ S3::ListAllMyBuckets -blocking false -parse-xml {} -result-type REST
+ }
+} -result "1 S3 usage -parse-xml" -match S3err
+
+test S3-10.30 {ListAllMyBuckets bad params two} -body {
+ S3expectErr {S3::ListAllMyBuckets -xyz hello}
+} -result "1 S3 usage -xyz" -match S3err
+
+test S3-10.40 {ListAllMyBuckets bad params three} -body {
+ S3expectErr {S3::ListAllMyBuckets -blocking false -parse-xml}
+} -result "1 S3 usage -parse-xml" -match S3err
+
+set testLAMB {<?xml version="1.0" encoding="UTF-8"?>
+<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd</ID><DisplayName>dnew@san.rr.com</DisplayName></Owner><Buckets><Bucket><Name>darren</Name><CreationDate>2006-10-29T07:04:48.000Z</CreationDate></Bucket><Bucket><Name>darren-test</Name><CreationDate>2006-10-29T07:04:48.000Z</CreationDate></Bucket><Bucket><Name>darren3</Name><CreationDate>2006-10-30T22:45:34.000Z</CreationDate></Bucket></Buckets></ListAllMyBucketsResult>}
+
+test S3-10.50 {ListAllMyBuckets result parsing RAW} -body {
+ S3::ListAllMyBuckets -parse-xml $testLAMB -result-type xml
+} -result $testLAMB
+
+test S3-10.60 {ListAllMyBuckets result parsing REST} -constraints BucketIO -body {
+ set dict [S3::ListAllMyBuckets -result-type REST]
+ dict get $dict httpstatus
+} -result "403"
+
+test S3-10.70 {ListAllMyBuckets result parsing PXML} -body {
+ set pxml [S3::ListAllMyBuckets -result-type pxml -parse-xml $testLAMB]
+ concat [lindex $pxml 0] [llength $pxml]
+} -result "ListAllMyBucketsResult 4"
+
+test S3-10.80 {ListAllMyBuckets result parsing NAMES} -body {
+ # Note these are defined to be alphabetical, so no sorting needed
+ S3::ListAllMyBuckets -result-type names -parse-xml $testLAMB
+} -result "darren darren-test darren3"
+
+test S3-10.90 {ListAllMyBuckets result parsing DICT} -body {
+ set dict [S3::ListAllMyBuckets -result-type dict -parse-xml $testLAMB]
+ puts [llength $dict]
+ puts [dict get $dict Owner/ID]
+ puts [dict get $dict Owner/DisplayName]
+ puts [dict get $dict Bucket/Name]
+ puts [dict get $dict Bucket/Date]
+} -output {8
+9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd
+dnew@san.rr.com
+darren darren-test darren3
+2006-10-29T07:04:48.000Z 2006-10-29T07:04:48.000Z 2006-10-30T22:45:34.000Z
+}
+
+test S3-10.100 {ListAllMyBuckets result parsing OWNER} -body {
+ S3::ListAllMyBuckets -result-type owner -parse-xml $testLAMB
+} -result {9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd dnew@san.rr.com}
+
+test S3-10.110 {ListAllMyBuckets result parsing error} -body {
+ S3expectErr [list S3::ListAllMyBuckets -result-type xyzzy \
+ -parse-xml $testLAMB]
+} -result "1 S3 usage -result-type" -match S3err
+
+test S3-10.120 {ListAllMyBuckets bad xml parsing error} -body {
+ S3expectErr {S3::ListAllMyBuckets -result-type xyzzy -parse-xml "<Hello"}
+} -result "1 S3 usage xml" -match S3err
+
+test S3-10.130 {ListAllMyBuckets background good} -constraints BucketIO -body {
+ S3loadKeys
+ set x [S3expectBackground {S3::ListAllMyBuckets -result-type REST -blocking false}]
+ dict get $x httpstatus
+} -result "200"
+
+test S3-10.140 {ListAllMyBuckets background bad} -constraints BucketIO -body {
+ S3loadKeys
+ S3expectErr {
+ S3expectBackground {
+ S3::ListAllMyBuckets -result-type REST -blocking true
+ }
+ }
+} -result "1 S3 test afterRan" -match S3err
+
+test S3-20.10 {PutBucket your own bucket} -constraints BucketIO -body {
+ S3loadKeys
+ set b [S3::SuggestBucket TclTestS3]
+ S3::PutBucket -bucket $b
+}
+
+test S3-20.20 {PutBucket someone else's bucket} -constraints BucketIO -body {
+ S3loadKeys
+ S3expectErr {S3::PutBucket -bucket /test/}
+} -result "1 S3 remote 409" -match S3err
+
+test S3-20.30 {PutBucket background failure} -constraints BucketIO -body {
+ S3loadKeys
+ set b [S3::SuggestBucket TclTestS3]
+ S3expectErr [list S3expectBackground [list S3::PutBucket -bucket $b]]
+} -result "1 S3 test afterRan" -match S3err
+
+test S3-20.40 {PutBucket background success} -constraints BucketIO -body {
+ S3loadKeys
+ set b [S3::SuggestBucket TclTestS3]
+ S3expectBackground [list S3::PutBucket -bucket $b -blocking false]
+}
+
+test S3-20.50 {PutBucket test no acl} -constraints BucketIO -body {
+ S3loadKeys
+ set b [S3::SuggestBucket TclTestS3]
+ S3::PutBucket -bucket $b
+ set d1 [dict create verb GET resource /$b rtype acl]
+ set d2 [S3::REST $d1]
+ set d3 [string first "READ" $d2]
+    return [expr {-1 == $d3}]
+} -result 1
+
+test S3-20.60 {PutBucket test pubread acl} -constraints BucketIO -body {
+ S3loadKeys
+ set b [S3::SuggestBucket TclTestS3]
+ S3::PutBucket -bucket $b -acl public-read
+ set d1 [dict create verb GET resource /$b rtype acl]
+ set d2 [S3::REST $d1]
+ set d3 [string first "AllUsers" $d2]
+ set d4 [string first "READ" $d2]
+    return [expr {0 < $d3 && $d3 < $d4}]
+} -result 1
+
+test S3-20.70 {PutBucket test given overrides default acl} \
+ -constraints BucketIO -body {
+ S3loadKeys
+ set b [S3::SuggestBucket TclTestS3]
+ S3::Configure -default-acl public-read-write
+ S3::PutBucket -bucket $b -acl public-read
+ S3::Configure -reset true
+ S3loadKeys
+ set d1 [dict create verb GET resource /$b rtype acl]
+ set d2 [S3::REST $d1]
+ set d3 [string first "AllUsers" $d2]
+ set d4 [string first "READ" $d2]
+ set d5 [string first "WRITE" $d2]
+    return [expr {0 < $d3 && $d3 < $d4 && $d5 == -1}]
+} -result 1
+
+test S3-20.80 {PutBucket test default acl} -constraints BucketIO -body {
+ S3loadKeys
+ set b [S3::SuggestBucket TclTestS3]
+ S3::Configure -default-acl public-read-write
+ S3::PutBucket -bucket $b
+ S3::Configure -reset true
+ S3loadKeys
+ set d1 [dict create verb GET resource /$b rtype acl]
+ set d2 [S3::REST $d1]
+ set d3 [string first "AllUsers" $d2]
+ set d4 [string first "READ" $d2]
+ set d5 [string first "WRITE" $d2]
+    return [expr {0 < $d3 && $d3 < $d4 && $d3 < $d5}]
+} -result 1
+
+test S3-30.10 {DeleteBucket error} \
+ -constraints "BucketIO BucketDeletion" -body {
+ S3loadKeys
+ set b [S3::SuggestBucket TclTestS3]
+ after 10000 ; # Wait for amazon to catch up
+ S3expectErr {S3::DeleteBucket}
+} -result "1 S3 usage -bucket" -match S3err
+
+test S3-30.20 {DeleteBucket good} \
+ -constraints "BucketIO BucketDeletion" -body {
+ S3loadKeys
+ set b [S3::SuggestBucket TclTestS3]
+ after 10000 ; # Wait for amazon to catch up
+ set x [S3::DeleteBucket -bucket $b]
+ after 10000 ; # Wait for amazon to catch up
+ return $x
+}
+
+test S3-30.30 {DeleteBucket fails on someone else's bucket} \
+ -constraints "BucketIO BucketDeletion" -body {
+ S3loadKeys
+ set b "test"
+ after 10000 ; # Wait for amazon to catch up
+ S3expectErr [list S3::DeleteBucket -bucket $b]
+} -result "1 S3 remote 403" -match S3err
+
+# Since bucket create/delete is high overhead for Amazon,
+# and it's flaky as well, don't test the background version,
+# since it uses the same code.
+
+# OK, since we need a bucket to test stuff, let's continue on.
+S3loadKeys
+set bucket [S3::SuggestBucket TclTestS3b]
+set req [dict create verb HEAD resource /$bucket]
+set res [S3::REST $req]
+set r1 [dict get $res httpstatus]
+set req [dict create verb HEAD resource /$bucket/fred]
+set res [S3::REST $req]
+set r2 [dict get $res httpstatus]
+if {200 != $r1 || 200 != $r2} {
+ S3::PutBucket -bucket $bucket
+ if {[tcltest::testConstraint Directory]} {
+ for {set i 0} {$i < 25} {incr i} {
+ puts "Creating $i of 25"
+ for {set j 0} {$j < 10} {incr j} {
+ set q [format %02d $i]
+ set d [S3::REST \
+ [dict create verb PUT resource /$bucket/thing/$q/$j \
+ inbody "This is $j inside $i"]]
+ S3::throwhttp $d
+ }
+ }
+ }
+ S3::REST [dict create verb PUT resource /$bucket/fred inbody "Fred"]
+ S3::REST [dict create verb PUT resource /$bucket/barney inbody "Barney"]
+ S3::REST [dict create verb PUT resource /$bucket/wilma inbody "Wilma"]
+ S3::REST [dict create verb PUT resource /$bucket/betty inbody "Betty"]
+ S3::REST [dict create verb PUT resource /$bucket/cartman inbody "Cartman" ]
+ S3::REST [dict create verb PUT resource /$bucket/cartoon/tweety \
+ inbody "Tweety"]
+ S3::REST [dict create verb PUT resource /$bucket/cartoon/sylvester \
+ inbody "Sylvester"]
+ S3::REST [dict create verb PUT resource "/$bucket/cartoon/road runner" \
+ inbody "RoadRunner"]
+ S3::REST [dict create verb PUT resource "/$bucket/cartoon/wile e. coyote" \
+ inbody "Coyote"]
+}
+
+# Note that -result-type REST or xml or pxml without a maxcount all
+# return lists of results of that type, since they don't really merge well.
+test S3-40.10 {GetBucket basic call} -constraints BucketIO -body {
+ set res [S3::GetBucket -bucket $bucket -result-type REST]
+ set x1 [llength $res]
+ set x2 [dict get [lindex $res 0] httpstatus]
+ return "$x1 $x2"
+} -result "1 200"
+
+test S3-40.20 {GetBucket get xml} -constraints BucketIO -body {
+ set res [S3::GetBucket -bucket $bucket -result-type xml]
+ set x1 [llength $res]
+ set x2 [lindex $res 0]
+ set x3 [lindex [::xsxp::parse $x2] 0]
+ return "$x1 $x3"
+} -result "1 ListBucketResult"
+
+test S3-40.30 {GetBucket get pxml} -constraints BucketIO -body {
+ set res [S3::GetBucket -bucket $bucket -result-type pxml]
+ set x1 [llength $res]
+ set x2 [lindex $res 0]
+ set x3 [lindex $x2 0]
+ return "$x1 $x3"
+} -result "1 ListBucketResult"
+
+test S3-40.40 {GetBucket names} -constraints BucketIO -body {
+ set r1 [S3::GetBucket -bucket $bucket -result-type names]
+ set r2 [lsort $r1]
+ set r3 [lsort -unique $r1]
+ return [list [llength $r1] [expr {$r1 eq $r2}] [expr {$r2 eq $r3}]]
+} -result "259 1 1"
+
+test S3-40.50 {GetBucket simple looping} -constraints BucketIO -body {
+ set res [S3::GetBucket -bucket $bucket -result-type REST -TEST 50]
+ return [llength $res]
+} -result "6" ; # 259, 50 at a time.
+
+test S3-40.60 {GetBucket looping, return names} -constraints BucketIO -body {
+ set r1 [S3::GetBucket -bucket $bucket -result-type names -TEST 50]
+ set r2 [lsort $r1]
+ set r3 [lsort -unique $r1]
+    return [list [llength $r1] [expr {$r1 eq $r2}] [expr {$r2 eq $r3}]]
+} -result "259 1 1"; # Shouldn't see the inners here.
+
+test S3-40.70 {GetBucket looping, return dict} -constraints BucketIO -body {
+ set res [S3::GetBucket -bucket $bucket -result-type dict -TEST 50]
+ set r1 [llength [dict get $res Key]]
+ set r2 [string compare [dict get $res Key] [lsort [dict get $res Key]]]
+ set r3 [llength [dict get $res LastModified]]
+ set r4 [llength [dict get $res ETag]]
+ set r5 [llength [dict get $res Size]]
+ set r6 [llength [dict get $res Owner/ID]]
+ set r7 [llength [dict get $res Owner/DisplayName]]
+ set r8 [llength [dict get $res CommonPrefixes/Prefix]]
+ return "$r1 $r2 $r3 $r4 $r5 $r6 $r7 $r8"
+} -result "259 0 259 259 259 259 259 0"
+
+test S3-40.80 {GetBucket non-looping, return dict} -constraints BucketIO -body {
+ set res [S3::GetBucket -bucket $bucket -result-type dict]
+ set r1 [llength [dict get $res Key]]
+ set r2 [string compare [dict get $res Key] [lsort [dict get $res Key]]]
+ set r3 [llength [dict get $res LastModified]]
+ set r4 [llength [dict get $res ETag]]
+ set r5 [llength [dict get $res Size]]
+ set r6 [llength [dict get $res Owner/ID]]
+ set r7 [llength [dict get $res Owner/DisplayName]]
+ set r8 [llength [dict get $res CommonPrefixes/Prefix]]
+ return "$r1 $r2 $r3 $r4 $r5 $r6 $r7 $r8"
+} -result "259 0 259 259 259 259 259 0"
+
+test S3-40.90 {GetBucket looping, prefix} -constraints BucketIO -body {
+ set r [S3::GetBucket -bucket $bucket \
+ -result-type names -TEST 50 -prefix "car"]
+ join $r \n
+} -result {cartman
+cartoon/road runner
+cartoon/sylvester
+cartoon/tweety
+cartoon/wile e. coyote}
+
+test S3-40.100 {GetBucket delimiter, prefix} -constraints BucketIO -body {
+ S3::GetBucket -bucket $bucket -result-type names -TEST 50 \
+ -prefix /thing/ -delimiter /
+} -result {thing/00/ thing/01/ thing/02/ thing/03/ thing/04/ thing/05/ thing/06/ thing/07/ thing/08/ thing/09/ thing/10/ thing/11/ thing/12/ thing/13/ thing/14/ thing/15/ thing/16/ thing/17/ thing/18/ thing/19/ thing/20/ thing/21/ thing/22/ thing/23/ thing/24/}
+
+test S3-40.110 {GetBucket delimiter, prefix again} -constraints BucketIO -body {
+ S3::GetBucket -bucket $bucket -result-type names -TEST 50 \
+ -prefix thing -delimiter /
+} -result {thing/}
+
+test S3-40.120 {GetBucket delimiter, no prefix} -constraints BucketIO -body {
+ S3::GetBucket -bucket $bucket -result-type names -TEST 50 -delimiter /
+} -result {barney betty cartman cartoon/ fred thing/ wilma}
+
+test S3-40.130 {GetBucket no default bucket} -constraints BucketIO -body {
+ S3expectErr {
+ S3::GetBucket -result-type names -TEST 50 -delimiter /
+ }
+} -result "1 S3 usage -bucket" -match S3err
+
+test S3-40.140 {GetBucket with default bucket} -constraints BucketIO -body {
+ S3::Configure -default-bucket $bucket
+ set res [S3::GetBucket -result-type names -TEST 50 -delimiter /]
+ S3::Configure -default-bucket ""
+ return $res
+} -result {barney betty cartman cartoon/ fred thing/ wilma}
+
+set bucket [S3::SuggestBucket TclTestS3] ; # Maybe delete later.
+
+proc getbody {resource} {
+ set req [dict create verb GET resource $resource]
+ set res [S3::REST $req]
+ S3::throwhttp $res
+ set body [dict get $res outbody]
+ return $body
+}
+
+proc delbody {resource} {
+ set req [dict create verb DELETE resource $resource]
+ set res [S3::REST $req]
+ S3::throwhttp $res
+}
+
+proc existsbody {resource} {
+ set req [dict create verb HEAD resource $resource]
+ set res [S3::REST $req]
+ return [expr {[dict get $res httpstatus] eq "200"}]
+}
+
+# Make a setup/cleanup pair for checking constraints on PUT and GET
+set pgsu {
+    # Create files with old and new mtimes, and with matching and differing contents
+ tcltest::makeFile "FILEONE" S3Tone.txt
+ tcltest::makeFile "FILETWO" S3Ttwo.txt
+ tcltest::makeFile "FILETHREE" S3Tthree.txt
+ tcltest::makeFile "This is some random content" S3Talpha.txt
+ tcltest::makeFile "This is some random content" S3Tbeta.txt
+ tcltest::makeFile "This is some random content" S3Tgamma.txt
+ tcltest::makeFile "Junk contents" S3junk.txt
+ set now [clock seconds]
+    file mtime S3Tone.txt [expr {$now - 300}]
+    file mtime S3Ttwo.txt [expr {$now + 300}]
+    file mtime S3Tbeta.txt [expr {$now + 300}]
+ S3::REST [dict create verb PUT resource /$bucket/ABC inbody "ABC HERE" \
+ headers {x-amz-meta-thing stuff} content-type application/tcltest]
+ if {[file exists S3junk.txt]} {file delete S3junk.txt}
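+    # The deletion above is deliberate: several tests below expect
+    # S3junk.txt to start out absent.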
+}
+
+set pgcu {
+ tcltest::removeFile S3Tone.txt
+ tcltest::removeFile S3Ttwo.txt
+ tcltest::removeFile S3Tthree.txt
+ tcltest::removeFile S3Talpha.txt
+ tcltest::removeFile S3Tbeta.txt
+ tcltest::removeFile S3Tgamma.txt
+ if {[file exists S3junk.txt]} {file delete S3junk.txt}
+ if {[existsbody /$bucket/XYZ]} {delbody /$bucket/XYZ}
+ if {[existsbody /$bucket/PDQ]} {delbody /$bucket/PDQ}
+ if {[existsbody /$bucket/ABC]} {delbody /$bucket/ABC}
+}
+
+
+test S3-50.10 {Put, basic content} -constraints "Put ItemIO" -body {
+ set c "This is a test\n"
+ set x [S3::Put -bucket $bucket -content $c -resource "XYZ"]
+ set y [getbody /$bucket/XYZ]
+ set z [expr {$y eq $c}]
+ return "$x $z"
+} -cleanup {
+ delbody /$bucket/XYZ
+} -result "1 1"
+
+test S3-50.20 {Put, with a file} -constraints "Put ItemIO" -setup {
+ set c "This is the second test.\nIt is still a test.\n"
+ tcltest::makeFile $c "S3junk.txt"
+} -body {
+ set x [S3::Put -bucket $bucket -file "S3junk.txt" -resource "XYZ"]
+ set y [getbody /$bucket/XYZ]
+ set z [expr {$y eq $c}]
+ return "$x $z"
+} -cleanup {
+ delbody /$bucket/XYZ
+ tcltest::removeFile "S3junk.txt"
+} -result "1 1"
+
+test S3-50.30 {Put with ACL, content-type, meta} \
+ -constraints "Put ItemIO" -setup {
+ set c "This is the third test.\nIt is still a test.\n"
+ tcltest::makeFile $c "S3junk.txt"
+} -body {
+ set x [S3::Put -bucket $bucket -file "S3junk.txt" -resource "XYZ" \
+ -content-type "application/frobulate" -acl "public-read" \
+ -x-amz-meta-one ONE -x-amz-meta-two TWO]
+ set y {} ; set z {}
+ set req [dict create verb GET resource /$bucket/XYZ]
+ set res [S3::REST $req]
+ S3::throwhttp $res
+ set headers [dict get $res outheaders]
+ set y [dict get $headers content-type]
+ set w1 [dict get $headers x-amz-meta-one]
+ set w2 [dict get $headers x-amz-meta-two]
+
+ set d1 [dict create verb GET resource /$bucket/XYZ rtype acl]
+ set d2 [S3::REST $d1]
+ set d3 [string first "AllUsers" $d2]
+ set d4 [string first "READ" $d2]
+ set z [expr 0 < $d3 && $d3 < $d4]
+ return [list $x $y $z $w1 $w2]
+} -cleanup {
+ delbody /$bucket/XYZ
+ tcltest::removeFile "S3junk.txt"
+} -result "1 application/frobulate 1 ONE TWO"
+
+test S3-50.40 {Put -compare never} -constraints "Put ItemIO" -body {
+ set x [S3::Put -file S3junk.txt -bucket $bucket -resource "XYZ" \
+ -compare never]
+ set y [existsbody /$bucket/XYZ]
+ return "$x $y"
+} -cleanup {
+ if {[existsbody /$bucket/XYZ]} {delbody /$bucket/XYZ}
+} -result "0 0"
+
+test S3-50.50 {Put -compare always} \
+ -setup $pgsu -cleanup $pgcu -constraints "Put ItemIO" -body {
+ set x [S3::Put -file S3Talpha.txt -bucket $bucket -resource "XYZ" \
+ -compare always]
+ set y [existsbody /$bucket/XYZ]
+ set z [S3::Put -file S3Talpha.txt -bucket $bucket -resource "XYZ" \
+ -compare always]
+ return "$x $y $z"
+} -result "1 1 1"
+
+test S3-50.60 {Put -compare exists} \
+ -setup $pgsu -cleanup $pgcu -constraints "Put ItemIO" -body {
+ set x1 [S3::Put -file S3Tone.txt -bucket $bucket -resource "XYZ" \
+ -compare exists]
+ set x2 [existsbody /$bucket/XYZ]
+ S3::Put -file S3Tone.txt -bucket $bucket -resource "XYZ" ; # really make it
+ set y1 [S3::Put -file S3Ttwo.txt -bucket $bucket -resource "XYZ" \
+ -compare exists]
+ set y2 [existsbody /$bucket/XYZ]
+ set y3 [string trim [getbody /$bucket/XYZ]]
+ return [list $x1 $x2 $y1 $y2 $y3]
+} -result "0 0 1 1 FILETWO"
+
+test S3-50.70 {Put -compare missing} \
+ -setup $pgsu -cleanup $pgcu -constraints "Put ItemIO" -body {
+ set x1 [S3::Put -file S3Tone.txt -bucket $bucket -resource "XYZ" \
+ -compare missing]
+ set x2 [existsbody /$bucket/XYZ]
+ set y1 [S3::Put -file S3Ttwo.txt -bucket $bucket -resource "XYZ" \
+ -compare missing]
+ set y2 [existsbody /$bucket/XYZ]
+ set y3 [string trim [getbody /$bucket/XYZ]]
+ return [list $x1 $x2 $y1 $y2 $y3]
+} -result "1 1 0 1 FILEONE"
+
+test S3-50.80 {Put -compare newer} \
+ -setup $pgsu -cleanup $pgcu -constraints "Put ItemIO" -body {
+ # Create the file with the current date
+ S3::Put -file S3Tthree.txt -bucket $bucket -resource "XYZ"
+ # Make sure ONE (old) doesn't overwrite it.
+ set x1 [S3::Put -file S3Tone.txt -bucket $bucket -resource "XYZ" \
+ -compare newer]
+ set x2 [string trim [getbody /$bucket/XYZ]]
+ set y1 [S3::Put -file S3Ttwo.txt -bucket $bucket -resource "XYZ" \
+ -compare newer]
+ set y2 [string trim [getbody /$bucket/XYZ]]
+ return [list $x1 $x2 $y1 $y2]
+} -result "0 FILETHREE 1 FILETWO"
+
+test S3-50.90 {Put -compare date} \
+ -setup $pgsu -cleanup $pgcu -constraints "Put ItemIO" -body {
+ S3::Configure -slop-seconds 60
+ S3::Put -file S3Tthree.txt -bucket $bucket -resource "XYZ"
+ set x1 [S3::Put -file S3Tone.txt -bucket $bucket -resource "XYZ" \
+ -compare date]
+ set x2 [string trim [getbody /$bucket/XYZ]]
+ set y1 [S3::Put -file S3Tthree.txt -bucket $bucket -resource "XYZ" \
+ -compare date]
+ set y2 [string trim [getbody /$bucket/XYZ]]
+ set z1 [S3::Put -file S3Tthree.txt -bucket $bucket -resource "PDQ" \
+ -compare date]
+ set z2 [string trim [getbody /$bucket/PDQ]]
+ return [list $x1 $x2 $y1 $y2 $z1 $z2]
+} -result "1 FILEONE 0 FILEONE 1 FILETHREE"
+
+test S3-50.100 {Put -compare checksum} \
+ -setup $pgsu -cleanup $pgcu -constraints "Put ItemIO" -body {
+ S3::Put -file S3Talpha.txt -bucket $bucket -resource "XYZ"
+ set x1 [S3::Put -file S3Tbeta.txt -bucket $bucket -resource "XYZ" \
+ -compare checksum]
+ set x2 [S3::Put -file S3Tbeta.txt -bucket $bucket -resource "PDQ" \
+ -compare checksum]
+ set x3 [S3::Put -content "This is some random content\n" \
+ -bucket $bucket -resource "XYZ" \
+ -compare checksum]
+ set funky "One\u2211Sigma\u5927Da"
+ S3::Put -content $funky -bucket $bucket -resource "XYZ"
+ set x4 [S3::Put -content $funky -bucket $bucket -resource "XYZ" \
+ -compare checksum]
+ return [list $x1 $x2 $x3 $x4]
+} -result "0 1 0 0"
+
+test S3-50.110 {Put -compare different} \
+ -setup $pgsu -cleanup $pgcu -constraints "Put ItemIO" -body {
+ S3::Configure -slop-seconds 60
+ S3::Put -file S3Talpha.txt -bucket $bucket -resource "XYZ"
+ set x1 [S3::Put -file S3Tbeta.txt -bucket $bucket -resource "XYZ" \
+ -compare different]
+ set x2 [S3::Put -file S3Tgamma.txt -bucket $bucket -resource "XYZ" \
+ -compare different]
+ set x3 [S3::Put -file S3Tthree.txt -bucket $bucket -resource "XYZ" \
+ -compare different]
+ set x4 [string trim [getbody /$bucket/XYZ]]
+ set x5 [S3::Put -content "FILETHREE\n" -bucket $bucket -resource "XYZ" \
+ -compare different]
+ return [list $x1 $x2 $x3 $x4 $x5]
+} -result "1 0 1 FILETHREE 0"
+
+test S3-50.120 {Put -compare error} -constraints "Put ItemIO" -body {
+ S3expectErr [list S3::Put -content "STUFF" \
+ -bucket $bucket -resource "XYZ" \
+ -compare other]
+} -result "1 S3 usage -compare" -match S3err
+
+test S3-50.130 {Put -file nonexistent} -constraints "Put ItemIO" -body {
+    S3expectErr [list S3::Put -file nonexistent.txt \
+ -bucket $bucket -resource "XYZ"]
+} -result "1 S3 usage -file" -match S3err
+
+
+test S3-60.10 {Get, basic content} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+ set x [S3::Get -bucket $bucket -content abc -resource "ABC"]
+ set y [getbody /$bucket/ABC]
+ set z [expr {$y eq $abc}]
+ return "$x $z"
+} -result "1 1"
+
+test S3-60.20 {Get, with a file} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+ set x [S3::Get -bucket $bucket -file "S3junk.txt" -resource "ABC"]
+ set y [tcltest::viewFile S3junk.txt]
+ set z [expr {$y eq "ABC HERE"}]
+ return "$x $z"
+} -result "1 1"
+
+test S3-60.30 {Get with meta} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+ set x [S3::Get -bucket $bucket -file "S3junk.txt" -resource "ABC" \
+ -headers thishead]
+ set y [dict get $thishead content-type]
+ set z [dict get $thishead x-amz-meta-thing]
+ return [list $x $y $z]
+} -result "1 application/tcltest stuff"
+
+test S3-60.40 {Get -compare never} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+ set x [S3::Get -file S3junk.txt -bucket $bucket -resource "ABC" \
+ -compare never]
+ set y [file exists S3junk.txt]
+ return "$x $y"
+} -result "0 0"
+
+test S3-60.50 {Get -compare always} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+ set x [S3::Get -file S3junk.txt -bucket $bucket -resource "ABC" \
+ -compare always]
+ set y [file exists S3junk.txt]
+ set z [S3::Get -file S3junk.txt -bucket $bucket -resource "ABC" \
+ -compare always]
+ set q [S3::Get -content plover -bucket $bucket -resource "ABC" \
+ -compare always]
+ return "$x $y $z $q $plover"
+} -result "1 1 1 1 ABC HERE"
+
+test S3-60.60 {Get -compare exists} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+    set x0 [file exists S3junk.txt]
+ set x1 [S3::Get -file S3junk.txt -bucket $bucket -resource "ABC" \
+ -compare exists]
+ set x2 [file exists S3junk.txt]
+ set y1 [S3::Get -file S3Tone.txt -bucket $bucket -resource "ABC" \
+ -compare exists]
+ set y2 [file exists S3Tone.txt]
+ set y3 [tcltest::viewFile S3Tone.txt]
+ return [list $x0 $x1 $x2 $y1 $y2 $y3]
+} -result "0 0 0 1 1 {ABC HERE}"
+
+test S3-60.70 {Get -compare missing} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+ set x1 [S3::Get -file S3Tone.txt -bucket $bucket -resource "ABC" \
+ -compare missing]
+ set x2 [file exists S3Tone.txt]
+ set y1 [S3::Get -file S3junk.txt -bucket $bucket -resource "ABC" \
+ -compare missing]
+ set y2 [file exists S3junk.txt]
+ set y3 [tcltest::viewFile S3junk.txt]
+ return [list $x1 $x2 $y1 $y2 $y3]
+} -result "0 1 1 1 {ABC HERE}"
+
+test S3-60.80 {Get -compare newer} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+ set x1 [S3::Get -file S3Tone.txt -bucket $bucket -resource "ABC" \
+ -compare newer]
+ set x2 [tcltest::viewFile S3Tone.txt]
+ set y1 [S3::Get -file S3Ttwo.txt -bucket $bucket -resource "ABC" \
+ -compare newer]
+ set y2 [tcltest::viewFile S3Ttwo.txt]
+ set z1 [S3::Get -file S3junk.txt -bucket $bucket -resource "ABC" \
+ -compare newer]
+ set z2 [tcltest::viewFile S3junk.txt]
+ set w1 [S3::Get -content w2 -bucket $bucket -resource "ABC" \
+ -compare newer]
+
+ return [list $x1 $x2 $y1 $y2 $z1 $z2 $w1 $w2]
+} -result "1 {ABC HERE} 0 FILETWO 1 {ABC HERE} 1 {ABC HERE}"
+
+test S3-60.90 {Get -compare date} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+ S3::Configure -slop-seconds 60
+ set x1 [S3::Get -file S3Tone.txt -bucket $bucket -resource "ABC" \
+ -compare date]
+ set x2 [tcltest::viewFile S3Tone.txt]
+ set y1 [S3::Get -file S3Ttwo.txt -bucket $bucket -resource "ABC" \
+ -compare date]
+ set y2 [tcltest::viewFile S3Ttwo.txt]
+ set z1 [S3::Get -file S3Tthree.txt -bucket $bucket -resource "ABC" \
+ -compare date]
+ set z2 [tcltest::viewFile S3Tthree.txt]
+ set w1 [S3::Get -content w2 -bucket $bucket -resource "ABC" \
+ -compare date]
+ return [list $x1 $x2 $y1 $y2 $z1 $z2 $w1 $w2]
+} -result "1 {ABC HERE} 1 {ABC HERE} 0 FILETHREE 1 {ABC HERE}"
+
+test S3-60.100 {Get -compare checksum} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+ S3::Put -file S3Talpha.txt -bucket $bucket -resource "XYZ"
+ set x1 [S3::Get -file S3Tbeta.txt -bucket $bucket -resource "XYZ" \
+ -compare checksum]
+ set x2 [S3::Get -file S3Tbeta.txt -bucket $bucket -resource "ABC" \
+ -compare checksum]
+ set x3 [tcltest::viewFile S3Tbeta.txt]
+ set x4 [S3::Get -content x5 -bucket $bucket -resource "ABC" \
+ -compare checksum]
+ return [list $x1 $x2 $x3 $x4 $x5]
+} -result "0 1 {ABC HERE} 1 {ABC HERE}"
+
+test S3-60.110 {Get -compare different} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+ S3::Configure -slop-seconds 60
+ S3::Put -file S3Talpha.txt -bucket $bucket -resource "XYZ"
+    set x0 [S3::Get -file S3junk.txt -bucket $bucket -resource "XYZ" \
+	-compare different] ; # Yes: local file nonexistent
+    set x1 [S3::Get -file S3Talpha.txt -bucket $bucket -resource "XYZ" \
+	-compare different] ; # No: same date, same contents
+ set x2 [S3::Get -file S3Tbeta.txt -bucket $bucket -resource "XYZ" \
+ -compare different] ; # Yes, diff date, same contents.
+ set x3 [S3::Get -file S3Tbeta.txt -bucket $bucket -resource "ABC" \
+ -compare different] ; # Yes, diff contents, same date
+ set x4 [S3::Get -content x5 -bucket $bucket -resource "ABC" \
+ -compare different] ; # Yes, variable
+ set x6 [tcltest::viewFile S3Tbeta.txt]
+ return [list $x0 $x1 $x2 $x3 $x4 $x5 $x6]
+} -result "1 0 1 1 1 {ABC HERE} {ABC HERE}"
+
+test S3-60.120 {Get -compare error} -constraints "Get ItemIO" -body {
+ S3expectErr [list S3::Get -file S3Tone.txt \
+ -bucket $bucket -resource "XYZ" \
+ -compare other]
+} -result "1 S3 usage -compare" -match S3err
+
+test S3-60.130 {Get resource nonexistent, file nonexistent A} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+    S3expectErr [list S3::Get -file nonexistent.txt \
+ -bucket $bucket -resource "XYZ"]
+} -result "1 S3 remote 404" -match S3err
+
+test S3-60.131 {Get resource nonexistent, file nonexistent B} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+    catch {S3::Get -file nonexistent.txt -bucket $bucket -resource "XYZ"}
+    file exists nonexistent.txt
+} -result "0"
+
+test S3-60.132 {Get resource nonexistent, file existent A} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+ S3expectErr [list S3::Get -file S3Talpha.txt \
+ -bucket $bucket -resource "XYZ"]
+} -result "1 S3 remote 404" -match S3err
+
+test S3-60.133 {Get resource nonexistent, file existent B} \
+ -setup $pgsu -cleanup $pgcu -constraints "Get ItemIO" -body {
+ catch {S3::Get -file S3Talpha.txt -bucket $bucket -resource "XYZ"}
+ file exists S3Talpha.txt
+} -result "1"
+
+test S3-60.140 {Get with -timestamp options} \
+ -constraints "Get ItemIO" -body {
+    # This test assumes your clock and Amazon's clock are within 10 seconds
+ tcltest::makeFile "RandomJunk" ts1.txt
+ tcltest::makeFile "RandomJunk" ts2.txt
+ after 10000
+ S3::Put -content "More random junk" -bucket $bucket -resource "TIMESTAMP"
+ after 5000
+ set tick [clock seconds]
+ after 5000
+ S3::Get -file ts1.txt -timestamp aws -bucket $bucket -resource "TIMESTAMP"
+ S3::Get -file ts2.txt -timestamp now -bucket $bucket -resource "TIMESTAMP"
+ set x1 [file mtime ts1.txt]
+ set x2 [file mtime ts2.txt]
+    return [list [expr {$x1 < $tick}] [expr {$x2 < $tick}]]
+} -cleanup {
+ tcltest::removeFile ts1.txt
+ tcltest::removeFile ts2.txt
+ if {[existsbody /$bucket/TIMESTAMP]} {delbody /$bucket/TIMESTAMP}
+} -result "1 0"
+
+test S3-70.10 {Head, resource exists} \
+ -setup $pgsu -cleanup $pgcu -constraints "Head ItemIO" -body {
+ set x1 [S3::Head -bucket $bucket -resource "ABC" -dict dict \
+ -headers headers -status status]
+ return [list $x1 [dict get $dict httpmessage] [dict exists $headers last-modified] $status]
+} -result "1 OK 1 {200 OK}"
+
+test S3-70.20 {Head, resource does not exist} \
+ -setup $pgsu -cleanup $pgcu -constraints "Head ItemIO" -body {
+ set x1 [S3::Head -bucket $bucket -resource "XYZ" -dict dict \
+ -headers headers -status status]
+ return [list $x1 $status]
+} -result "0 {404 {Not Found}}"
+
+test S3-80.10 {Delete, resource exists} \
+ -setup $pgsu -cleanup $pgcu -constraints "Delete ItemIO" -body {
+ set x1 [S3::Delete -bucket $bucket -resource "ABC" -status status]
+ return [list $x1 $status]
+} -result "1 {204 {No Content}}"
+
+test S3-80.20 {Delete, resource nonexistent} \
+ -setup $pgsu -cleanup $pgcu -constraints "Delete ItemIO" -body {
+ set x1 [S3::Delete -bucket $bucket -resource "XYZ" -status status]
+ return [list $x1 $status]
+} -result "1 {204 {No Content}}"
+
+test S3-80.30 {Delete, resource not mine} \
+ -setup $pgsu -cleanup $pgcu -constraints "Delete ItemIO" -body {
+    # Note that ami.prizecapital.net is also mine, but it is owned by a
+    # client's account, hence the expected 403.
+ set x1 [S3::Delete -bucket "ami.prizecapital.net" \
+ -resource "README.txt" -status status]
+ return [list $x1 $status]
+} -result "0 {403 Forbidden}"
+
+test S3-90.10 {GetAcl REST} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+#set x1 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type xml]
+#puts "\n\n$x1\n\n"
+ set x2 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type REST]
+ return [list [dict get $x2 httpstatus] [string index [dict get $x2 outbody] 0]]
+} -result "200 <"
+
+#test S3-90.11 {GetAcl XML} \
+ #-setup $pgsu -constraints "Zap" -body {
+ #set x2 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type xml]
+ #set x3 [open xyzzy.xml w]
+ #fconfigure $x3 -translation binary -encoding binary
+ #puts $x3 $x2
+ #close $x3
+ #exit
+ #set x2 [S3::PutAcl -bucket $bucket -resource "ABC" -acl \
+ #[string trim [read [open xyzzy.xml]]]]
+ #puts $x2 ; exit
+#} -result 1
+
+test S3-90.20 {GetAcl pxml} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+ set x2 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type pxml]
+ return [list [lindex $x2 0] [lindex $x2 2 0]]
+} -result "AccessControlPolicy Owner"
+
+test S3-90.30 {GetAcl dict} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+ set x2 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type dict]
+ set owner [dict get $x2 owner]
+ set acl [dict get $x2 acl]
+ set z1 [dict get $acl FULL_CONTROL]
+    set z2 [expr {$owner eq $z1}]
+ return $z2
+} -result "1"
+
+test S3-90.40 {GetAcl -parse-xml} \
+ -constraints "Acl" -body {
+ set xml {<?xml version="1.0" encoding="UTF-8"?>
+<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd</ID><DisplayName>dnew@san.rr.com</DisplayName></Owner><AccessControlList><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group"><URI>http://acs.amazonaws.com/groups/global/AllUsers</URI></Grantee><Permission>FULL_CONTROL</Permission></Grant><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd</ID><DisplayName>dnew@san.rr.com</DisplayName></Grantee><Permission>FULL_CONTROL</Permission></Grant><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>a5ee16f393707820a7f2d58631351fe839972d25865f8fc423a754d77523e6d4</ID><DisplayName>darren</DisplayName></Grantee><Permission>READ</Permission></Grant><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>a1bf9e3c79a243e04e31bf3d1f532aca94646ab917c188831241bf5d575fee92</ID><DisplayName>Darren</DisplayName></Grantee><Permission>WRITE</Permission></Grant></AccessControlList></AccessControlPolicy>}
+ set x2 [S3::GetAcl -parse-xml $xml -result-type dict]
+ return $x2
+} -result "owner 9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd acl {READ a5ee16f393707820a7f2d58631351fe839972d25865f8fc423a754d77523e6d4 WRITE a1bf9e3c79a243e04e31bf3d1f532aca94646ab917c188831241bf5d575fee92 FULL_CONTROL {http://acs.amazonaws.com/groups/global/AllUsers 9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd}}"
+
+test S3-90.50 {PutAcl private} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+ set x1 [S3::PutAcl -bucket $bucket -resource "ABC" -acl private]
+ set x2 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type dict]
+ set x3 [dict get $x2 acl]
+ set x4 [llength $x3]
+ return [list [string range $x1 0 19] $x4 [lindex $x3 0]]
+} -result "<AccessControlPolicy 2 FULL_CONTROL"
+
+test S3-90.60 {PutAcl nonexistent get} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+ S3expectErr [list S3::PutAcl -bucket $bucket -resource XYZ -acl private]
+} -result "1 S3 remote 404" -match S3err
+
+test S3-90.70 {PutAcl nonexistent put} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+ set x2 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type dict]
+ set owner [dict get $x2 owner]
+ S3expectErr [list S3::PutAcl -owner $owner \
+ -bucket $bucket -resource XYZ -acl private]
+} -result "1 S3 remote 404" -match S3err
+
+test S3-90.80 {PutAcl from xml} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+ set x0 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type xml]
+ set x1 [S3::PutAcl -bucket $bucket -resource "ABC" -acl $x0]
+ set x2 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type dict]
+ set x3 [dict get $x2 acl]
+ set x4 [llength $x3]
+ if {"<?xml" == [string range $x1 0 4]} {
+ set x1 [string range $x1 [expr 1+[string first "\n" $x1]] end]
+ }
+ return [list [string range $x1 0 19] $x4 [lindex $x3 0]]
+} -result "<AccessControlPolicy 2 FULL_CONTROL"
+
+test S3-90.90 {PutAcl public} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+ S3expectErr [list S3::PutAcl -bucket $bucket -resource "ABC" -acl public]
+} -result "1 S3 usage -acl public" -match S3err
+
+test S3-90.100 {PutAcl public-read} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+ set x1 [S3::PutAcl -bucket $bucket -resource "ABC" -acl public-read]
+ set x2 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type dict]
+ set x3 [dict get $x2 acl]
+ set x4 [llength $x3]
+ set x5 [lsort [dict keys $x3]]
+ return [list [string range $x1 0 19] $x4 $x5]
+} -result "<AccessControlPolicy 4 {FULL_CONTROL READ}"
+
+test S3-90.110 {PutAcl public-read-write} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+ set x1 [S3::PutAcl -bucket $bucket -resource "ABC" -acl public-read-write]
+ set x2 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type dict]
+ set x3 [dict get $x2 acl]
+ set x4 [llength $x3]
+ set x5 [lsort [dict keys $x3]]
+ return [list [string range $x1 0 19] $x4 $x5]
+} -result "<AccessControlPolicy 6 {FULL_CONTROL READ WRITE}"
+
+test S3-90.120 {PutAcl authenticated-read} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+ set x1 [S3::PutAcl -bucket $bucket -resource "ABC" -acl authenticated-read]
+ set x2 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type dict]
+ set x3 [dict get $x2 acl]
+ set x4 [llength $x3]
+ set x5 [lsort [dict keys $x3]]
+ return [list [string range $x1 0 19] $x4 $x5]
+} -result "<AccessControlPolicy 4 {FULL_CONTROL READ}"
+
+test S3-90.130 {PutAcl complex} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+ set dict [dict create \
+ FULL_CONTROL {9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd AuthenticatedUsers} \
+ WRITE darren@prizecapital.net \
+ READ http://acs.amazonaws.com/groups/global/AllUsers ]
+ set x1 [S3::PutAcl -bucket $bucket -resource "ABC" -acl $dict]
+ set x2 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type dict]
+ set x3 [dict get $x2 acl]
+ set x4 [llength $x3]
+ return [list [string range $x1 0 19] [lsort [dict keys $x3]]]
+} -result "<AccessControlPolicy {FULL_CONTROL READ WRITE}"
+
+test S3-90.140 {Put with keep on existing object} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+ set dict [dict create \
+ FULL_CONTROL {9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd AuthenticatedUsers} \
+ WRITE darren@prizecapital.net \
+ READ http://acs.amazonaws.com/groups/global/AllUsers ]
+ set x1 [S3::PutAcl -bucket $bucket -resource "ABC" -acl $dict]
+ S3::Put -bucket $bucket -resource "ABC" -file "S3Tone.txt" -acl keep
+ set x2 [S3::GetAcl -bucket $bucket -resource "ABC" -result-type dict]
+ set x3 [dict get $x2 acl]
+ set x4 [llength $x3]
+ return [list [string range $x1 0 19] [lsort [dict keys $x3]]]
+} -result "<AccessControlPolicy {FULL_CONTROL READ WRITE}"
+
+test S3-90.150 {Put with keep on new object} \
+ -setup $pgsu -cleanup $pgcu -constraints "Acl" -body {
+ S3::Put -bucket $bucket -resource "XYZ" -file "S3Tone.txt" -acl keep
+    set x2 [S3::GetAcl -bucket $bucket -resource "XYZ" -result-type dict]
+    set x3 [dict get $x2 acl]
+    set x4 [llength $x3]
+    return [list $x4 [lsort [dict keys $x3]]]
+} -result "2 FULL_CONTROL"
+
+
+test S3-100.10 {Pull} \
+ -setup S3loadKeys -constraints "Directory ItemIO" -body {
+ # I actually tested this manually much more extensively,
+ # but some of the tests are difficult, due to needing to
+ # set up a bunch of directories with different permissions, etc.
+ set bucket [S3::SuggestBucket TclTestS3b]
+ set dir S3Tdir
+ catch {file delete -force -- $dir}
+ file mkdir $dir
+ set res [S3::Pull -bucket $bucket -prefix thing \
+ -directory $dir -compare always -delete true]
+ set r1 [dict get $res {} filescopied]
+ set r2 [dict get $res {} errorskipped]
+ set r3 [dict get $res {} filesdeleted]
+ set r4 [file exists [file join $dir 00/6]]
+ return [list $r1 $r2 $r3 $r4]
+} -cleanup {
+ file delete -force -- $dir
+} -result {250 0 0 1}
+
+test S3-100.20 {Push} \
+ -setup S3loadKeys -constraints "Directory ItemIO" -body {
+ set bucket [S3::SuggestBucket TclTestS3b]
+ set dir S3Tdir
+ catch {file delete -force -- $dir}
+ file mkdir $dir
+ set res [S3::Pull -bucket $bucket -prefix thing \
+ -directory $dir -compare always -delete true]
+ set r1 [dict get $res {} filescopied]
+ set r2 [dict get $res {} errorskipped]
+ set r3 [dict get $res {} filesdeleted]
+ set r4 [file exists [file join $dir 00/6]]
+ # Now the rest of the test... :-)
+ set bucket [S3::SuggestBucket TclTestS3] ; # different bucket
+ set res [S3::Push -bucket $bucket -prefix hither/yon \
+ -directory $dir -compare always -delete true]
+ set r5 [dict get $res {} filescopied]
+ set r6 [dict get $res {} errorskipped]
+ set r7 [dict get $res {} filesdeleted]
+ set res [S3::REST [dict create verb HEAD resource /$bucket/hither/yon/00/6]]
+ set r8 [dict get $res httpstatus]
+ return [list $r1 $r2 $r3 $r4 $r5 $r6 $r7 $r8]
+} -cleanup {
+ file delete -force -- $dir
+ set bucket [S3::SuggestBucket TclTestS3]
+ set names [S3::GetBucket -bucket $bucket -prefix hither/yon \
+ -result-type names]
+ foreach name $names {
+ S3::Delete -bucket $bucket -resource $name
+ }
+} -result {250 0 0 1 250 0 0 200}
+
+test S3-100.30 {Push with deletes and stuff} \
+ -setup S3loadKeys -constraints "Directory ItemIO" -body {
+ set bucket [S3::SuggestBucket TclTestS3b]
+ set dir S3Tdir
+ catch {file delete -force -- $dir}
+ file mkdir $dir
+ set res [S3::Pull -bucket $bucket -prefix thing \
+ -directory $dir -compare always -delete true]
+ set r1 [dict get $res {} filescopied]
+ set r2 [dict get $res {} errorskipped]
+ set r3 [dict get $res {} filesdeleted]
+ set r4 [file exists [file join $dir 00/6]]
+ set bucket [S3::SuggestBucket TclTestS3] ; # different bucket
+ set res [S3::Push -bucket $bucket -prefix hither/yon \
+ -directory $dir -compare always -delete true]
+ set r5 [dict get $res {} filescopied]
+ set r6 [dict get $res {} errorskipped]
+ set r7 [dict get $res {} filesdeleted]
+ set res [S3::REST [dict create verb HEAD resource /$bucket/hither/yon/00/6]]
+ set r8 [dict get $res httpstatus]
+ # Now the rest of the test... :-)
+ file delete -force [file join $dir 03]
+ tcltest::makeFile "xxx" [file join $dir "j1.txt"]
+ tcltest::makeFile "xxx" [file join $dir "j2.txt"]
+    # Sadly, tcltest::makeFile insists on adding newlines
+ set x [open [file join $dir j1.txt] w];puts -nonewline $x "123456";close $x
+ set x [open [file join $dir j2.txt] w];puts -nonewline $x "678901";close $x
+ set res [S3::Push -bucket $bucket -prefix hither/yon \
+ -directory $dir -compare missing -delete true]
+ set r9 [dict get $res {} filescopied]
+ set r10 [dict get $res {} errorskipped]
+ set r11 [dict get $res {} filesdeleted]
+ set r12 [dict get $res {} bytescopied]
+ set res [S3::REST [dict create verb HEAD resource /$bucket/hither/yon/08/7]]
+ set r13 [dict get $res httpstatus]
+ set res [S3::REST [dict create verb HEAD resource /$bucket/hither/yon/j1.txt]]
+ set r14 [dict get $res httpstatus]
+ return [list $r1 $r2 $r3 $r4 $r5 $r6 $r7 $r8 $r9 $r10 $r11 $r12 $r13 $r14]
+} -cleanup {
+ file delete -force -- $dir
+ set bucket [S3::SuggestBucket TclTestS3]
+ set names [S3::GetBucket -bucket $bucket -prefix hither/yon \
+ -result-type names]
+ foreach name $names {
+ S3::Delete -bucket $bucket -resource $name
+ }
+} -result {250 0 0 1 250 0 0 200 2 0 10 12 200 200}
+
+test S3-100.40 {Pull with deletes and stuff} \
+ -setup S3loadKeys -constraints "Directory ItemIO" -body {
+ set bucket [S3::SuggestBucket TclTestS3b]
+ set dir S3Tdir
+ catch {file delete -force -- $dir}
+ file mkdir $dir
+ set res [S3::Pull -bucket $bucket -prefix thing \
+ -directory $dir -compare always -delete true]
+ set r1 [dict get $res {} filescopied]
+ set r2 [dict get $res {} errorskipped]
+ set r3 [dict get $res {} filesdeleted]
+ set r4 [file exists [file join $dir 00/6]]
+ set bucket [S3::SuggestBucket TclTestS3] ; # different bucket
+ set res [S3::Push -bucket $bucket -prefix hither/yon \
+ -directory $dir -compare always -delete true]
+ set r5 [dict get $res {} filescopied]
+ set r6 [dict get $res {} errorskipped]
+ set r7 [dict get $res {} filesdeleted]
+ set res [S3::REST [dict create verb HEAD resource /$bucket/hither/yon/00/6]]
+ set r8 [dict get $res httpstatus]
+ file delete -force [file join $dir 03]
+ tcltest::makeFile "xxx" [file join $dir "j1.txt"]
+ tcltest::makeFile "xxx" [file join $dir "j2.txt"]
+    # Sadly, tcltest::makeFile insists on adding newlines
+ set x [open [file join $dir j1.txt] w];puts -nonewline $x "123456";close $x
+ set x [open [file join $dir j2.txt] w];puts -nonewline $x "678901";close $x
+ set res [S3::Push -bucket $bucket -prefix hither/yon \
+ -directory $dir -compare missing -delete true]
+ set r9 [dict get $res {} filescopied]
+ set r10 [dict get $res {} errorskipped]
+ set r11 [dict get $res {} filesdeleted]
+ set r12 [dict get $res {} bytescopied]
+ set res [S3::REST [dict create verb HEAD resource /$bucket/hither/yon/08/7]]
+ set r13 [dict get $res httpstatus]
+ set res [S3::REST [dict create verb HEAD resource /$bucket/hither/yon/j1.txt]]
+ set r14 [dict get $res httpstatus]
+ # Now the rest of the test... :-)
+ file mkdir [file join $dir ToDelete]
+ set x [open [file join $dir ToDelete T1.txt] w];puts $x "Hello";close $x
+ set x [open [file join $dir ToDelete T2.txt] w];puts $x "World";close $x
+ set bucket [S3::SuggestBucket TclTestS3b]
+ set res [S3::Pull -bucket $bucket -prefix thing \
+ -directory $dir -compare missing -delete true]
+ set r15 [dict get $res {} filescopied] ; # The 03 directory
+ set r16 [dict get $res {} compareskipped] ; # The rest.
+ set r17 [dict get $res {} filesdeleted] ; # j1, j2, T1, T2, ToDelete
+ return [list $r1 $r2 $r3 $r4 $r5 $r6 $r7 $r8 $r9 $r10 $r11 $r12 $r13 $r14 $r15 $r16 $r17]
+} -cleanup {
+ file delete -force -- $dir
+ set bucket [S3::SuggestBucket TclTestS3]
+ set names [S3::GetBucket -bucket $bucket -prefix hither/yon \
+ -result-type names]
+ foreach name $names {
+ S3::Delete -bucket $bucket -resource $name
+ }
+} -result {250 0 0 1 250 0 0 200 2 0 10 12 200 200 10 240 5}
+
+test S3-100.50 {Push and Pull with -compare never -delete true} \
+ -setup S3loadKeys -constraints "Directory ItemIO" -body {
+ # This test creates 00 thru 09 in a bucket and a local dir.
+ # It then deletes 07 from the bucket and 03 locally.
+ # It then pushes and pulls with -compare never -delete true.
+ # It expects 0 files copied and 10/11 deleted.
+ # It then checks the deletes happened.
+ set bucket [S3::SuggestBucket TclTestS3b]
+ set dir S3Tdir
+ catch {file delete -force -- $dir}
+ file mkdir $dir
+ set res [S3::Pull -bucket $bucket -prefix thing \
+ -directory $dir -compare always -delete true]
+ set bucket [S3::SuggestBucket TclTestS3] ; # different bucket
+ set res [S3::Push -bucket $bucket -prefix hither/yon \
+ -directory $dir -compare always -delete true]
+ for {set i 0} {$i <= 9} {incr i} {
+ S3::Delete -bucket $bucket -resource hither/yon/07/$i
+ }
+ file delete -force [file join $dir 03]
+ set res [S3::Push -bucket $bucket -prefix hither/yon \
+ -directory $dir -compare never -delete true]
+ set r1 [dict get $res {} filescopied]
+ set r2 [dict get $res {} errorskipped]
+ set r3 [dict get $res {} filesdeleted]
+ set res [S3::REST [dict create verb HEAD resource /$bucket/hither/yon/03/7]]
+ set r4 [dict get $res httpstatus]
+ set res [S3::Pull -bucket $bucket -prefix hither/yon \
+ -directory $dir -compare never -delete true]
+ set r5 [dict get $res {} filescopied]
+ set r6 [dict get $res {} errorskipped]
+ set r7 [dict get $res {} filesdeleted]
+ set r8 [file exists [file join $dir 07 4]]
+ set r9 [file exists [file join $dir 07]]
+ return [list $r1 $r2 $r3 $r4 $r5 $r6 $r7 $r8 $r9]
+} -cleanup {
+ file delete -force -- $dir
+ set bucket [S3::SuggestBucket TclTestS3]
+ set names [S3::GetBucket -bucket $bucket -prefix hither/yon \
+ -result-type names]
+ foreach name $names {
+ S3::Delete -bucket $bucket -resource $name
+ }
+} -result {0 0 10 404 0 0 11 0 0}
+
+test S3-100.60 {Toss} \
+ -setup S3loadKeys -constraints "Directory ItemIO" -body {
+ set bucket [S3::SuggestBucket TclTestS3b]
+ set dir S3Tdir
+ catch {file delete -force -- $dir}
+ file mkdir $dir
+ set res [S3::Pull -bucket $bucket -prefix thing \
+ -directory $dir -compare missing -delete true]
+ set bucket [S3::SuggestBucket TclTestS3] ; # different bucket
+ set res [S3::Push -bucket $bucket -prefix hither/yon \
+ -directory $dir -compare missing -delete true]
+ set res [S3::Toss -bucket $bucket -prefix /hither]
+ set r1 [dict get $res {} filesdeleted]
+ set r2 [dict get $res {} filesnotdeleted]
+ return [list $r1 $r2]
+} -cleanup {
+ file delete -force -- $dir
+ set bucket [S3::SuggestBucket TclTestS3]
+ set names [S3::GetBucket -bucket $bucket -prefix hither/yon \
+ -result-type names]
+ foreach name $names {
+ S3::Delete -bucket $bucket -resource $name
+ }
+} -result {250 0}
+
+# set res [S3::REST {resource /darren/xyzzyplover verb HEAD}]
+# puts $res\n\n\n ; after 3000
+# set res [S3::REST [list resource /$bucket/fred verb HEAD]]
+# puts $res\n\n\n ; after 3000
+# set res [dict get $res outheaders]
+# set remote_length [dict get $res content-length]
+# set remote_etag [string trim [dict get $res etag] \"]
+# set remote_date [clock scan [dict get $res last-modified]]
+# puts "remote_length=$remote_length"
+# puts "remote_etag=$remote_etag"
+# puts "remote_date=$remote_date"
+# puts "\n\n"
+# set body "ABC\u2211S\u5927D"
+# set res [S3::REST [list resource /darren/plover verb PUT inbody $body]]
+# set res [S3::REST [list resource /darren/plover verb HEAD]]
+# puts $res\n\n\n ; after 3000
+
+CleanUpBuckets [tcltest::testConstraint BucketDeletion]
+
+#----------------------------------------------------------------------
+
+testsuiteCleanup
+puts "(If anything failed, check all test buckets got cleaned up!)"
+puts "Done!" ; after 5000
diff --git a/tcllib/modules/amazon-s3/TODO.txt b/tcllib/modules/amazon-s3/TODO.txt
new file mode 100644
index 0000000..1b4fda2
--- /dev/null
+++ b/tcllib/modules/amazon-s3/TODO.txt
@@ -0,0 +1,20 @@
+STILL TO DO: Implement S3::Acl.
+
+STILL TO DO: Optional argument to Put and Get for compares: the
+GetBucket listing of the remote bucket holding the contents you're
+comparing against, so that one GetBucket call can replace the
+per-object Head requests.
+
+STILL TO DO: Parse headers with multiple lines per header. (Especially
+x-amz-meta-* headers.)
+
+STILL TO DO: Fix Push, Pull, Toss to not be cut-paste development.
+
+STILL TO DO: Add test to check that -compare never -delete true does
+what you would want it to do.
+
+STILL TO DO: Modify S3.tcl to remove xsxp and use TclDOM instead.
+
+STILL TO DO: Add UI, both command-line and graphical.
+
+STILL TO DO: Finish OddJob, a separate application based on S3.
+
diff --git a/tcllib/modules/amazon-s3/pkgIndex.tcl b/tcllib/modules/amazon-s3/pkgIndex.tcl
new file mode 100644
index 0000000..1d4f197
--- /dev/null
+++ b/tcllib/modules/amazon-s3/pkgIndex.tcl
@@ -0,0 +1,9 @@
+# pkgIndex.tcl --
+# Copyright (c) 2006 Darren New
+# This is for the Amazon S3 web service packages.
+
+if {![package vsatisfies [package provide Tcl] 8.5]} {return}
+
+package ifneeded xsxp 1.0 [list source [file join $dir xsxp.tcl]]
+package ifneeded S3 1.0.3 [list source [file join $dir S3.tcl]]
+
diff --git a/tcllib/modules/amazon-s3/test-S3.config b/tcllib/modules/amazon-s3/test-S3.config
new file mode 100644
index 0000000..2dad351
--- /dev/null
+++ b/tcllib/modules/amazon-s3/test-S3.config
@@ -0,0 +1,2 @@
+S3::Configure -accesskeyid use-yours \
+-secretaccesskey put-yours-here
diff --git a/tcllib/modules/amazon-s3/xsxp.man b/tcllib/modules/amazon-s3/xsxp.man
new file mode 100644
index 0000000..3f66da8
--- /dev/null
+++ b/tcllib/modules/amazon-s3/xsxp.man
@@ -0,0 +1,137 @@
+[manpage_begin xsxp n 1.0]
+[keywords dom]
+[keywords parser]
+[keywords xml]
+[moddesc {Amazon S3 Web Service Utilities}]
+[titledesc {eXtremely Simple Xml Parser}]
+[copyright {Copyright 2006 Darren New. All Rights Reserved.}]
+[category {Text processing}]
+[require Tcl 8.4]
+[require xsxp 1]
+[require xml]
+[description]
+This package provides a simple interface to parse XML into a pure-value list.
+It also provides accessor routines to pull out specific subtags,
+not unlike DOM access.
+This package was written for and is used by Darren New's Amazon S3 access package.
+
+[para]
+This is pretty lame, but I needed something like this for S3,
+and at the time, TclDOM would not work with the new 8.5 Tcl
+due to version number problems.
+[para]
+In addition, this is a pure-value implementation. There is no
+garbage to clean up in the event of a thrown error, for example.
+This simplifies the code for sufficiently small XML documents,
+which is what Amazon's S3 guarantees.
+
+[para]
+Copyright 2006 Darren New. All Rights Reserved.
+NO WARRANTIES OF ANY TYPE ARE PROVIDED.
+COPYING OR USE INDEMNIFIES THE AUTHOR IN ALL WAYS.
+This software is licensed under essentially the same
+terms as Tcl. See LICENSE.txt for the terms.
+
+[section COMMANDS]
+The package implements five rather simple procedures.
+One parses, one is for debugging, and the rest pull various
+parts of the parsed document out for processing.
+
+[list_begin definitions]
+
+[call [cmd xsxp::parse] [arg xml]]
+
+This parses an XML document (using the [package xml] package from TclXML, in a SAX sort of way) and builds a data structure which it returns if the parsing succeeded. The return value is referred to herein as a "pxml", or "parsed xml". The list consists of two or more elements:
+
+[list_begin itemized]
+[item]
+The first element is the name of the tag.
+[item]
+The second element is an array-get formatted list of key/value pairs. The keys are attribute names and the values are attribute values. This is an empty list if there are no attributes on the tag.
+[item]
+The third through end elements are the children of the node, if any. Each child is, recursively, a pxml.
+[item]
+Note that if the zeroth element, i.e. the tag name, is "%PCDATA", then
+the attributes will be empty and the third element will be the text of
+the element. In addition, if an element's contents consist only of
+PCDATA, it will have only one child, and all the PCDATA will be
+concatenated. In other words, this parser works poorly for XML whose
+elements contain both child tags and PCDATA. Since Amazon S3 does not
+do this (and, for that matter, most uses of XML where XML is a poor
+choice don't do this), this is probably not a serious limitation.
+[list_end]
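+
+[para]
+For instance, the hypothetical document
+[example {<a x="1"><b>hi</b></a>}]
+parses into the pxml
+[example {a {x 1} {b {} {%PCDATA {} hi}}}]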
+
+[para]
+
+[call [cmd xsxp::fetch] [arg pxml] [arg path] [opt [arg part]]]
+
+[arg pxml] is a parsed XML, as returned from xsxp::parse.
+[arg path] is a list of element tag names. Each element is the name
+of a child to look up, optionally followed by a
+hash ("#") and a string of digits. An empty list or an initial empty element
+selects [arg pxml]. If no hash sign is present, the behavior is as if "#0"
+had been appended to that element. (Instead of a list, the path may
+also be given as a single string with slashes separating the elements,
+where convenient.)
+
+[para]
+
+An element of [arg path] scans the children at the indicated level
+for the nth instance of a child whose tag matches the part of the
+element before the hash sign. If an element is simply "#" followed
+by digits, that indexed child is selected, regardless of the tags
+in the children. Hence, an element of "#3" will always select
+the fourth child of the node under consideration.
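+
+[para]
+For instance, assuming [arg pxml] holds a parsed S3 ListBucketResult
+with several Contents children (a layout assumed here only to
+illustrate the indexing), the call
+[example {xsxp::fetch $pxml {Contents#1 Key} %PCDATA}]
+returns the key of the second Contents element.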
+
+[para]
+[arg part] defaults to "%ALL". It can be one of the following case-sensitive terms:
+[list_begin definitions]
+[def %ALL] returns the entire selected element.
+[def %TAGNAME] returns lindex 0 of the selected element.
+[def %ATTRIBUTES] returns lindex 1 of the selected element.
+
+[def %CHILDREN] returns lrange 2 through end of the selected element,
+resulting in a list of elements being returned.
+
+[def %PCDATA] returns a concatenation of all the bodies of
+direct children of this node whose tag is %PCDATA.
+It throws an error if no such children are found. That
+is, part=%PCDATA means return the textual content found
+in that node but not in its child nodes.
+
+[def %PCDATA?] is like %PCDATA, but returns an empty string if
+no PCDATA is found.
+
+[list_end]
+
+[para]
+For example, to fetch the first bold text from the fifth paragraph of the body of your HTML file,
+[example {xsxp::fetch $pxml {body p#4 b} %PCDATA}]
+
+[para]
+
+[call [cmd xsxp::fetchall] [arg pxml_list] [arg path] [opt [arg part]]]
+
+This iterates over each PXML in [arg pxml_list] (which must be a list
+of pxmls) selecting the indicated path from it, building a new list
+with the selected data, and returning that new list.
+
+[para]
+
+For example, [arg pxml_list] might be
+the %CHILDREN of a particular element, and the [arg path] and [arg part]
+might select from each child a sub-element in which we're interested.
+
+[para]
+
+[call [cmd xsxp::only] [arg pxml] [arg tagname]]
+This iterates over the direct children of [arg pxml] and selects only
+those with [arg tagname] as their tag. Returns a list of matching
+elements.
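+
+[para]
+A minimal sketch combining the two, again assuming [arg pxml] holds a
+parsed ListBucketResult:
+[example {set contents [xsxp::only $pxml Contents]
+set keys [xsxp::fetchall $contents Key %PCDATA]}]
+This leaves a list of all object keys in the [var keys] variable.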
+
+[para]
+
+[call [cmd xsxp::prettyprint] [arg pxml] [opt [arg chan]]]
+This outputs to [arg chan] (default stdout) a pretty-printed
+version of [arg pxml].
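+
+[para]
+For instance, the hypothetical pxml shown for [cmd xsxp::parse] above
+pretty-prints as
+[example {a x='1'
+ b
+  %PCDATA: hi}]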
+
+[list_end]
+
+[vset CATEGORY amazon-s3]
+[include ../doctools2base/include/feedback.inc]
+[manpage_end]
diff --git a/tcllib/modules/amazon-s3/xsxp.tcl b/tcllib/modules/amazon-s3/xsxp.tcl
new file mode 100644
index 0000000..1fc2042
--- /dev/null
+++ b/tcllib/modules/amazon-s3/xsxp.tcl
@@ -0,0 +1,254 @@
+# xsxp.tcl --
+#
+###Abstract
+# Extremely Simple XML Parser
+#
+# This is pretty lame, but I needed something like this for S3,
+# and at the time, TclDOM would not work with the new 8.5 Tcl
+# due to version number problems.
+#
+# In addition, this is a pure-value implementation. There is no
+# garbage to clean up in the event of a thrown error, for example.
+# This simplifies the code for sufficiently small XML documents,
+# which is what Amazon's S3 guarantees.
+#
+###Copyright
+# Copyright (c) 2006 Darren New.
+# All Rights Reserved.
+# NO WARRANTIES OF ANY TYPE ARE PROVIDED.
+# COPYING OR USE INDEMNIFIES THE AUTHOR IN ALL WAYS.
+# See the license terms in LICENSE.txt
+#
+###Revision String
+# SCCS: %Z% %M% %I% %E% %U%
+
+# xsxp::parse $xml
+# Returns a parsed XML, or PXML. A pxml is a list.
+# The first element is the name of the tag.
+# The second element is a list of name/value pairs of the
+# associated attributes, if any.
+# The third through final values are recursively PXML values.
+# If the first element (element zero, that is) is "%PCDATA",
+# then the attributes will be empty and the third element
+# will be the text of the element.
+
+# xsxp::fetch $pxml $path ?$part?
+# $pxml is a parsed XML, as returned from xsxp::parse.
+# $path is a list of elements. Each element is the name of
+# a child to look up, optionally followed by a hash ("#")
+# and a string of digits. An empty list or an initial empty
+# element selects $pxml. If no hash sign is present, the
+# behavior is as if "#0" had been appended to that element.
+# An element of $path scans the children at the indicated
+# level for the nth instance of a child whose tag matches
+# the part of the element before the hash sign. If an element
+# is simply "#" followed by digits, that indexed child is
+# selected, regardless of the tags in the children. So
+# an element of #3 will always select the fourth child
+# of the node under consideration.
+# $part defaults to %ALL. It can be one of the following:
+# %ALL - returns the entire selected element.
+# %TAGNAME - returns lindex 0 of the selected element.
+# %ATTRIBUTES - returns lindex 1 of the selected element.
+# %CHILDREN - returns lrange 2 through end of the selected element,
+# resulting in a list of elements being returned.
+# %PCDATA - returns a concatenation of all the bodies of
+# direct children of this node whose tag is %PCDATA.
+# Throws an error if no such children are found. That
+# is, part=%PCDATA means return the textual content found
+#		in that node but not in its child nodes.
+# %PCDATA? - like %PCDATA, but returns an empty string if
+# no PCDATA is found.
+
+# xsxp::fetchall $pxml_list $path ?$part?
+# Iterates over each PXML in $pxml_list, selecting the indicated
+# path from it, building a new list with the selected data, and
+# returning that new list. For example, $pxml_list might be
+# the %CHILDREN of a particular element, and the $path and $part
+# might select from each child a sub-element in which we're interested.
+
+# xsxp::only $pxml $tagname
+# Iterates over the direct children of $pxml and selects only
+# those with $tagname as their tag. Returns a list of matching
+# elements.
+
+# xsxp::prettyprint $pxml
+# Outputs to stdout a nested-list notation of the parsed XML.
+
+package require xml
+package provide xsxp 1.0
+
+namespace eval xsxp {
+
+ variable Stack
+ variable Cur
+
+ proc Characterdatacommand {characterdata} {
+ variable Cur
+ # puts "characterdatacommand $characterdata"
+ set x [list %PCDATA {} $characterdata]
+ lappend Cur $x
+ }
+
+ proc Elementstartcommand {name attlist args} {
+ # puts "elementstart $name {$attlist} $args"
+ variable Stack
+ variable Cur
+ lappend Stack $Cur
+ set Cur [list $name $attlist]
+ }
+
+ proc Elementendcommand {args} {
+ # puts "elementend $args"
+ variable Stack
+ variable Cur
+ set x [lindex $Stack end]
+ lappend x $Cur
+ set Cur $x
+ set Stack [lrange $Stack 0 end-1]
+ }
+
+ proc parse {xml} {
+ variable Cur
+ variable Stack
+ set Cur {}
+ set Stack {}
+ set parser [::xml::parser \
+ -characterdatacommand [namespace code Characterdatacommand] \
+ -elementstartcommand [namespace code Elementstartcommand] \
+ -elementendcommand [namespace code Elementendcommand] \
+ -ignorewhitespace 1 -final 1
+ ]
+ $parser parse $xml
+ $parser free
+ # The following line is needed because the close of the last element
+ # appends the outermost element to the item on the top of the stack.
+ # Since there's nothing on the top of the stack at the close of the
+ # last element, we append the current element to an empty list.
+ # In essence, since we don't really have a terminating condition
+ # on the recursion, an empty stack is still treated like an element.
+ set Cur [lindex $Cur 0]
+ set Cur [Normalize $Cur]
+ return $Cur
+ }
+
+ proc Normalize {pxml} {
+ # This iterates over pxml recursively, finding entries that
+ # start with multiple %PCDATA elements, and coalesces their
+ # content, so if an element contains only %PCDATA, it is
+ # guaranteed to have only one child.
+ # Not really necessary, given definition of part=%PCDATA
+ # However, it makes pretty-prints nicer (for AWS at least)
+ # and ends up with smaller lists. I have no idea why they
+	# would put quotes around an MD5 hash in hex, though.
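+	# For example (a hypothetical pxml, shown only as an illustration):
+	#   {a {} {%PCDATA {} foo} {%PCDATA {} bar}}
+	# normalizes to
+	#   {a {} {%PCDATA {} foobar}}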
+ set dupl 1
+ while {$dupl} {
+ set first [lindex $pxml 2]
+ set second [lindex $pxml 3]
+ if {[lindex $first 0] eq "%PCDATA" && [lindex $second 0] eq "%PCDATA"} {
+ set repl [list %PCDATA {} [lindex $first 2][lindex $second 2]]
+ set pxml [lreplace $pxml 2 3 $repl]
+ } else {
+ set dupl 0
+ for {set i 2} {$i < [llength $pxml]} {incr i} {
+ set pxml [lreplace $pxml $i $i [Normalize [lindex $pxml $i]]]
+ }
+ }
+ }
+ return $pxml
+ }
+
+ proc prettyprint {pxml {chan stdout} {indent 0}} {
+ puts -nonewline $chan [string repeat " " $indent]
+ if {[lindex $pxml 0] eq "%PCDATA"} {
+ puts $chan "%PCDATA: [lindex $pxml 2]"
+ return
+ }
+ puts -nonewline $chan "[lindex $pxml 0]"
+ foreach {name val} [lindex $pxml 1] {
+ puts -nonewline $chan " $name='$val'"
+ }
+ puts $chan ""
+ foreach node [lrange $pxml 2 end] {
+	    prettyprint $node $chan [expr {$indent + 1}]
+ }
+ }
+
+ proc fetch {pxml path {part %ALL}} {
+ set path [string trim $path /]
+ if {-1 != [string first / $path]} {
+ set path [split $path /]
+ }
+ foreach element $path {
+ if {$pxml eq ""} {return ""}
+ foreach {tag count} [split $element #] {
+ if {$tag ne ""} {
+ if {$count eq ""} {set count 0}
+ set pxml [lrange $pxml 2 end]
+ while {0 <= $count && 0 != [llength $pxml]} {
+ if {$tag eq [lindex $pxml 0 0]} {
+ incr count -1
+ if {$count < 0} {
+ # We're done. Go on to next element.
+ set pxml [lindex $pxml 0]
+ } else {
+ # Not done yet. Throw this away.
+ set pxml [lrange $pxml 1 end]
+ }
+ } else {
+ # Not what we want.
+ set pxml [lrange $pxml 1 end]
+ }
+ }
+ } else { # tag eq ""
+ if {$count eq ""} {
+ # Just select whole $pxml
+ } else {
+ set pxml [lindex $pxml [expr {2+$count}]]
+ }
+ }
+ break
+ } ; # done the foreach [split] loop
+ } ; # done all the elements.
+ if {$part eq "%ALL"} {return $pxml}
+ if {$part eq "%ATTRIBUTES"} {return [lindex $pxml 1]}
+ if {$part eq "%TAGNAME"} {return [lindex $pxml 0]}
+ if {$part eq "%CHILDREN"} {return [lrange $pxml 2 end]}
+ if {$part eq "%PCDATA" || $part eq "%PCDATA?"} {
+ set res "" ; set found 0
+ foreach elem [lrange $pxml 2 end] {
+ if {"%PCDATA" eq [lindex $elem 0]} {
+ append res [lindex $elem 2]
+ set found 1
+ }
+ }
+ if {$found || $part eq "%PCDATA?"} {
+ return $res
+ } else {
+ error "xsxp::fetch did not find requested PCDATA"
+ }
+ }
+	return $pxml ; # Don't know what the caller is after
+ }
+
+ proc only {pxml tag} {
+ set res {}
+ foreach element [lrange $pxml 2 end] {
+ if {[lindex $element 0] eq $tag} {
+ lappend res $element
+ }
+ }
+ return $res
+ }
+
+ proc fetchall {pxml_list path {part %ALL}} {
+ set res [list]
+ foreach pxml $pxml_list {
+ lappend res [fetch $pxml $path $part]
+ }
+ return $res
+ }
+}
+
+namespace eval xsxp { namespace export parse prettyprint fetch }
+
diff --git a/tcllib/modules/amazon-s3/xsxp.test b/tcllib/modules/amazon-s3/xsxp.test
new file mode 100644
index 0000000..97efdcb
--- /dev/null
+++ b/tcllib/modules/amazon-s3/xsxp.test
@@ -0,0 +1,166 @@
+# -*- tcl -*-
+# xsxp.test: tests for the xsxp package.
+
+# This file contains a collection of tests for the xsxp
+# package. Sourcing this file into Tcl runs the tests and generates
+# output for errors. No output means no errors were found.
+
+# Copyright (c) 2006,2008 Darren New. All Rights Reserved.
+# Copyright (c) 2008 Andreas Kupries <andreas_kupries@users.sourceforge.net>
+# (Boilerplate stuff (header, footer))
+# All rights reserved.
+#
+# RCS: @(#) $Id: xsxp.test,v 1.3 2008/09/04 02:11:13 andreas_kupries Exp $
+
+# -------------------------------------------------------------------------
+
+source [file join \
+ [file dirname [file dirname [file join [pwd] [info script]]]] \
+ devtools testutilities.tcl]
+
+testsNeedTcl 8.5
+testsNeedTcltest 2.0
+
+if {[catch {package require xml}]} {
+ puts " Aborting the tests found in \"[file tail [info script]]\""
+ puts " Requiring xml package, not found."
+ return
+}
+
+support {
+ # Requires xml (TclXML)
+}
+testing {
+ useLocal xsxp.tcl xsxp
+}
+
+# -------------------------------------------------------------------------
+package require -exact xsxp 1.0
+
+tcltest::configure -verbose {body error pass}
+tcltest::configure -debug 1
+
+set setup_one {
+ set xml {<?xml version="1.0" encoding="UTF-8"?>
+<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd</ID><DisplayName>dnew@san.rr.com</DisplayName></Owner><AccessControlList><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd</ID><DisplayName>dnew@san.rr.com</DisplayName></Grantee><Permission>FULL_CONTROL</Permission></Grant></AccessControlList></AccessControlPolicy>}
+}
+
+tcltest::test xsxp-1.10 {Basic parsing} -setup $setup_one -body {
+ set pxml [::xsxp::parse $xml]
+ return [lindex $pxml 0]
+} -result {AccessControlPolicy}
+
+tcltest::test xsxp-1.20 {Precision parsing} -setup $setup_one -body {
+ return [::xsxp::parse $xml]
+} -result {AccessControlPolicy {} {Owner {} {ID {} {%PCDATA {} 9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd}} {DisplayName {} {%PCDATA {} dnew@san.rr.com}}} {AccessControlList {} {Grant {} {Grantee {xsi:type CanonicalUser} {ID {} {%PCDATA {} 9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd}} {DisplayName {} {%PCDATA {} dnew@san.rr.com}}} {Permission {} {%PCDATA {} FULL_CONTROL}}}}}
+
+tcltest::test xsxp-1.30 {Test pretty printing} -setup $setup_one -body {
+ ::xsxp::prettyprint [::xsxp::parse $xml]
+} -output {AccessControlPolicy
+ Owner
+ ID
+ %PCDATA: 9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd
+ DisplayName
+ %PCDATA: dnew@san.rr.com
+ AccessControlList
+ Grant
+ Grantee xsi:type='CanonicalUser'
+ ID
+ %PCDATA: 9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd
+ DisplayName
+ %PCDATA: dnew@san.rr.com
+ Permission
+ %PCDATA: FULL_CONTROL
+}
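+
+# prettyprint writes an indented rendering of a parsed tree to a
+# channel, stdout by default, which is why xsxp-1.30 checks -output
+# rather than -result; the disabled driver at the bottom of this
+# file passes an explicit channel instead.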
+
+tcltest::test xsxp-1.40 {Access via path string} -setup $setup_one -body {
+ set pxml [::xsxp::parse $xml]
+ return [::xsxp::fetch $pxml "Owner/DisplayName" %PCDATA]
+} -result {dnew@san.rr.com}
+
+tcltest::test xsxp-1.50 {Access via path list} -setup $setup_one -body {
+ set pxml [::xsxp::parse $xml]
+ return [::xsxp::fetch $pxml "Owner DisplayName" %PCDATA]
+} -result {dnew@san.rr.com}
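+
+# The path argument of fetch may be given either as a single
+# "/"-separated string (xsxp-1.40) or as a Tcl list of components
+# (xsxp-1.50); both name the same <DisplayName> node.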
+
+set setup_two {
+set xml {<?xml version="1.0" encoding="UTF-8"?>
+<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>darren</Name><Prefix></Prefix><Marker></Marker><MaxKeys>1000</MaxKeys><IsTruncated>false</IsTruncated><Contents><Key>t1.jpg</Key><LastModified>2006-10-27T23:19:07.000Z</LastModified><ETag>&quot;a251eabc2e69e9716878924b6ec291c7&quot;</ETag><Size>1512545</Size><Owner><ID>9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd</ID><DisplayName>dnew@san.rr.com</DisplayName></Owner><StorageClass>STANDARD</StorageClass></Contents><Contents><Key>t2.jpg</Key><LastModified>2006-10-27T23:19:44.000Z</LastModified><ETag>&quot;ebc9b242811239ada85f202346353f31&quot;</ETag><Size>1826062</Size><Owner><ID>9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd</ID><DisplayName>dnew@san.rr.com</DisplayName></Owner><StorageClass>STANDARD</StorageClass></Contents></ListBucketResult>}
+set pxml [::xsxp::parse $xml]
+}
+
+tcltest::test xsxp-2.10 {Fetch top-level item} -setup $setup_two -body {
+ ::xsxp::fetch $pxml MaxKeys
+} -result {MaxKeys {} {%PCDATA {} 1000}}
+
+set c0 {Contents {} {Key {} {%PCDATA {} t1.jpg}} {LastModified {} {%PCDATA {} 2006-10-27T23:19:07.000Z}} {ETag {} {%PCDATA {} {"a251eabc2e69e9716878924b6ec291c7"}}} {Size {} {%PCDATA {} 1512545}} {Owner {} {ID {} {%PCDATA {} 9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd}} {DisplayName {} {%PCDATA {} dnew@san.rr.com}}} {StorageClass {} {%PCDATA {} STANDARD}}}
+
+tcltest::test xsxp-2.20 {Fetch another top-level item} -setup $setup_two -body {
+ ::xsxp::fetch $pxml Contents
+} -result $c0
+
+tcltest::test xsxp-2.30 {Fetch #0 item} -setup $setup_two -body {
+    ::xsxp::fetch $pxml Contents#0
+} -result $c0
+
+set c1 {Contents {} {Key {} {%PCDATA {} t2.jpg}} {LastModified {} {%PCDATA {} 2006-10-27T23:19:44.000Z}} {ETag {} {%PCDATA {} {"ebc9b242811239ada85f202346353f31"}}} {Size {} {%PCDATA {} 1826062}} {Owner {} {ID {} {%PCDATA {} 9fb13c24488e3d7556693247d5a463c1837c3c8ede28f4094228e6c4eb5d70bd}} {DisplayName {} {%PCDATA {} dnew@san.rr.com}}} {StorageClass {} {%PCDATA {} STANDARD}}}
+
+tcltest::test xsxp-2.40 {Fetch #1 item} -setup $setup_two -body {
+ ::xsxp::fetch $pxml Contents#1
+} -result $c1
+
+tcltest::test xsxp-2.50 {Fetch item past end} -setup $setup_two -body {
+ ::xsxp::fetch $pxml Contents#2
+} -result {}
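+
+# A "#n" suffix on a path component selects the n-th child with that
+# tag; a bare tag is equivalent to "#0" (compare xsxp-2.20 and
+# xsxp-2.30), an index past the last match yields an empty string
+# (xsxp-2.50), and "#n" alone picks the n-th child of any name
+# (xsxp-2.60 below).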
+
+tcltest::test xsxp-2.60 {Check %TAGNAME} -setup $setup_two -body {
+ ::xsxp::fetch $pxml #4 %TAGNAME
+} -result {IsTruncated}
+
+tcltest::test xsxp-2.70 {Check merge of PCDATA} -setup $setup_two -body {
+ ::xsxp::fetch $pxml Contents/ETag %PCDATA
+} -result {"a251eabc2e69e9716878924b6ec291c7"}
+
+tcltest::test xsxp-2.80 {Check lack of PCDATA} -setup $setup_two -body {
+ ::xsxp::fetch $pxml Prefix %PCDATA
+} -returnCodes 1 -result "xsxp::fetch did not find requested PCDATA"
+
+tcltest::test xsxp-2.90 {Check lack of PCDATA?} -setup $setup_two -body {
+ ::xsxp::fetch $pxml Prefix %PCDATA?
+} -result ""
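+
+# The optional "part" argument controls what fetch returns: %PCDATA
+# raises an error when the addressed element holds no character data
+# (xsxp-2.80), while %PCDATA? degrades to an empty string (xsxp-2.90).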
+
+
+tcltest::test xsxp-3.10 {only} -setup $setup_two -body {
+ set only [::xsxp::only $pxml Contents]
+ return [list [llength $only] [lindex $only 0 0] [lindex $only 1 0]]
+} -result {2 Contents Contents}
+
+tcltest::test xsxp-4.10 {fetchall basic} -setup $setup_two -body {
+ set only [::xsxp::only $pxml Contents]
+ ::xsxp::fetchall $only Key %PCDATA
+} -result {t1.jpg t2.jpg}
+
+tcltest::test xsxp-5.10 {Fetch %CHILDREN of a %PCDATA node} -setup $setup_two -body {
+ ::xsxp::fetch $pxml Contents#1/Key/%PCDATA %CHILDREN
+} -result {t2.jpg}
+
+
+
+# Disabled ad-hoc driver: prettyprint every .xml file found in the
+# ./xml directory into a .txt file of the same name. Kept for manual
+# debugging; flip the guard to 1 to run it.
+if {0} {
+    foreach file [glob -directory xml *] {
+        puts $file
+        if {[string range $file end-3 end] ne ".xml"} continue
+        set in [open $file r]
+        set xml [read $in]
+        close $in
+        set pxml [::xsxp::parse $xml]
+        set out [open [string range $file 0 end-4].txt w] ; # lazy: assumes a 4-character ".xml" extension
+        ::xsxp::prettyprint $pxml $out
+        close $out
+    }
+}
+
+#----------------------------------------------------------------------
+testsuiteCleanup
+puts "Done!"