* See all TODO markings in source code
* Take a step back, reorganize. E.g.:
    * stuff in `utils` should be user-useful only; other subroutines ought to go
    elsewhere
    * Should probably move subroutines in `main` to another module so that
    `main` is just that: `main()`. New module would be something like `core` --
    subroutines of the same sort as those found in `utils`, but NOT usually for
    user consumption (and ALSO not generally used multiple times by internals?
    or should this be the same spot as other non-user-facing subroutines?)
        * Also note that a number of `main()` subroutines right now, such as
        the ones for finding settings and setting host lists, should be updated
        so that library users can use them!
* Re-examine API of existing operations, e.g. prompt's `validate` option
    * wrt `validate`, figure out which is better: 'dual-mode' single arguments
    like that, or splitting it into 2 different arguments
        * dual-mode makes sense given an either-or situation like that one
        * but at the same time it feels kind of messy
        * see how stdlib does it in similar situations. guessing 2 arguments
        with "only X or Y may be given, but not both at the same time" note?
    * look at the rest too
* Output handling:
    * Put in logging-to-disk and/or leverage the logging module for stdout too?
        * e.g. it already has the concepts of info/warn/etc -- can that be used?
    * Ensure things behave gracefully when used as a lib
        * Temporarily had 'invoked_as_fab' env var set in main(); but that made
          testing a huge pain, so backed it out. Put it back in? or find some
          other method? perhaps inverse -- setting has to be turned on
          explicitly by a library user? or just ensure that e.g. printing,
          exceptions have simple controls for making them shut up?
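A minimal sketch of the logging idea, assuming we just hang Fabric's output off a stdlib `logging` logger (names illustrative; logging-to-disk would then be one extra `FileHandler`, and a library user could silence everything by removing/replacing handlers):

```python
import logging

# Sketch: reuse the stdlib logging module for stdout handling, so the
# info/warning/error level concepts come for free. Logger name and format
# here are illustrative, not an actual Fabric API.
logger = logging.getLogger("fabric")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("[%(levelname)s] %(message)s"))
logger.addHandler(handler)

logger.info("run: ls /tmp")
logger.warning("remote command exited nonzero")
```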
* Continue building out any previously made tests till coverage.py implies we
have pretty decent coverage (100% would be nice, but even 100% from coverage.py
doesn't literally mean full coverage...)
    * Have started using Nose for testing; probably stick with that.
    * Ditto with Fudge for mocking/stubbing/expecting; may need to contribute
    some patches to it, though, it's not great for functional coding and I
    don't see the need to 100% class-ify my code just so my mocker works better.
* Sphinx documentation
    * Update all docstrings so they "read well" in the generated API:
        * General language / tense / etc
        * Where applicable, turn ``function()`` into `function` (so the default
        role takes effect and makes it a link)
        * If that default role doesn't work right for non-functions, add the
        appropriate role prefixes (`:class:`, `:mod:`, etc.)
    * Go over old static docs to make sure anything applicable is copied over
    and updated.
    * Don't forget the top level text files, e.g. INSTALL
    * Make an FAQ!
    * Brainstorm other common sections we're currently missing (history, etc)
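For the default-role item above, the relevant knob is probably Sphinx's `default_role` setting in `conf.py` (sketch; whether `py:obj` resolves non-function targets cleanly is exactly the open question):

```python
# conf.py (sketch): make bare `name` references resolve as Python objects,
# so `function` becomes a link without an explicit :py:func: prefix.
default_role = 'py:obj'
```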
* Run Pylint, fix what it complains about (and generate a config file to make it
shut up about stuff I don't care about)
* Make sure "constants" are consistently named: I've been trending towards not
  using ALL_CAPS, but in some places I probably did use them, so check and
  update (or change my mind and go back; but it just feels ugly when one is
  referencing e.g. ENV in user code all over...)
* Add with_sudo option to put(), which would need to be implemented as a call
to itself (or make an inner function?) followed by a sudo mv call.
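The `with_sudo` flow could be sketched roughly like this (pure command-building sketch; the actual upload and `sudo` invocation are elided, and all names are hypothetical):

```python
import posixpath

def put_with_sudo_commands(local_path, remote_path, tmp_dir="/tmp"):
    # Hypothetical sketch: upload to a temp location the SSH user can write
    # to, then emit the `mv` command that would be run via sudo to move the
    # file into its final, root-owned destination.
    tmp_path = posixpath.join(tmp_dir, posixpath.basename(local_path))
    sudo_command = "mv %s %s" % (tmp_path, remote_path)
    return tmp_path, sudo_command
```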
* Add Paramiko SSHConfig support: it can provide username and port and other
options. Look at `man ssh_config` for what else people can put in that file.
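Paramiko already exposes this as `paramiko.SSHConfig` (`.parse(fileobj)` / `.lookup(hostname)`); the sketch below just shows the kind of per-host data we'd get back, using a hand-rolled parser so it stands alone:

```python
def parse_ssh_config(text):
    # Minimal sketch of what SSHConfig.lookup() provides: per-host
    # User/Port/HostName settings from an ~/.ssh/config-style file.
    # Real code would use paramiko.SSHConfig instead of this parser.
    settings, host = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        key, value = line.split(None, 1)
        key = key.lower()
        if key == 'host':
            host = value
            settings[host] = {}
        elif host is not None:
            settings[host][key] = value
    return settings
```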
* Try to make Fabric threadsafe (and/or not using threads for outputting) so a
parallel operation mode can work again.
    * Right now it uses threads and shared state for outputting, and shared
    state for env/connection/etc stuff. This is probably not threadsafe as-is.
    * Even if the shared state works OK, the thread use in outputting is kind
    of wonky. See if it can be redone with coroutines, Twisted, Kamaelia/Axon,
    or etc.
* Make execution model more robust/flexible:
    * Right now, relatively simple, calls each function one by one, and for each
    function, runs on each host one by one, and the host list may be different
    per-function.
    * What may make more sense is to specify host list "first", then for each
    host, you call each function in turn (in which case the order of functions
    may matter more than it does now). This would mean that the logic for host
    discovery changes a decent amount.
    * Do we want to allow this to be switched up dynamically? I.e. allow user
    to specify a "mode" (like in the old Fabric) to determine which of those
    two algorithms is used?
    * How do these decisions affect what decorators/etc can be applied? I.e.
    @hosts doesn't make any sense in the latter scenario because there is only
    one global host list per session. (But isn't that the more sensible
    solution? When would you execute a `fab` command line and expect the host
    list to change during the session?)
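The two execution orders under discussion, reduced to their loop structure (illustrative only):

```python
def serial_per_task(tasks, host_lists):
    # Current model: for each task, run it across that task's own host list.
    calls = []
    for task in tasks:
        for host in host_lists[task]:
            calls.append((task, host))
    return calls

def serial_per_host(tasks, hosts):
    # Alternate model: fix one global host list first, then run every task
    # on each host in turn (task ordering matters more here).
    calls = []
    for host in hosts:
        for task in tasks:
            calls.append((task, host))
    return calls
```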
* Related to the previous bullet point, but independent: possibly make the
concept of "host" more flexible to include host-specific password, username,
arch, OS, etc.
* Possibly add back in the old `shell` functionality (it would have to be a
command line option such as `--run-shell` -- there are no more internal
"commands" and it doesn't make sense as one anyway, as it cannot be run along
with other commands at the same time.)
    * Probably leverage IPython as a library; I've seen it done before
    * Should end up equivalent to a user running `ipython` and then doing a
    bunch of typical Fabric imports, i.e. all the operations plus `env`.
* Check Python 2.6 compatibility.
* Bash tab-completion for Fab tasks, assuming it can be done without being too
laggy.
* Possibly allow "aliases" in `env`, i.e. 'user' and 'username' treated as
being the same key to limit possible confusion.
    * See `_AliasDict`; possibly roll it back into `_AttributeDict` again?
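The aliasing idea boils down to something like this (sketch; not the actual `_AliasDict` code):

```python
class AliasDict(dict):
    # Sketch: certain keys are declared synonyms, so env['user'] and
    # env['username'] read and write the same underlying slot.
    def __init__(self, data=None, aliases=None):
        dict.__init__(self, data or {})
        self.aliases = aliases or {}

    def _resolve(self, key):
        # Map an alias to its canonical key; unknown keys pass through.
        return self.aliases.get(key, key)

    def __getitem__(self, key):
        return dict.__getitem__(self, self._resolve(key))

    def __setitem__(self, key, value):
        dict.__setitem__(self, self._resolve(key), value)
```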
* Better remote-end prompt detection/passthrough to local user
    * See pexpect, it's awesome.
* Add timeout support or ensure existing network level timeouts work well
* Possibly include path.py support (or rather use some of its concepts, since
I think it only works on local paths as-is?)
    * See if its website is back up; would want to apply this sort of stuff to
    contrib.files, most likely.
* Allow persistent environments via context managers, either real (run a bunch
of commands in one command string) or fake (put a prefix with e.g. 'cd foo && '
in front of each shell invocation, behind the scenes)
    * See settings() and cd() for prior art; cd() in particular would probably
    turn into a specific application of this generalized manager, if necessary
    (make sure we're consistent with settings(), i.e. always use general
    manager, or allow for making some special cases to save a bit of typing)
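The "fake" persistent-environment approach could be sketched as a prefix stack (illustrative names; not the real `cd()` implementation):

```python
from contextlib import contextmanager

_prefixes = []  # module-level stack of command prefixes (sketch only)

@contextmanager
def cd(path):
    # "Fake" persistence: each wrapped command gets 'cd <path> && '
    # prepended behind the scenes, instead of keeping one shell open.
    _prefixes.append("cd %s && " % path)
    try:
        yield
    finally:
        _prefixes.pop()

def wrap(command):
    # What run()/sudo() would do to a command before sending it over SSH.
    return "".join(_prefixes) + command
```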
* Strip ANSI colors from remote text, as an option (so I can stop having to do
`ls --color=never` and so on)
    * Less necessary once I stopped being an idiot and using --color=auto, but
    still useful in cases where user doesn't have that control or, like me,
    wasn't fully aware of how to turn it off correctly.
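The stripping itself is nearly a one-liner with a common CSI-sequence regex (sketch; not necessarily complete for every escape sequence out there):

```python
import re

# Matches CSI escape sequences such as \x1b[0m or \x1b[01;34m, which cover
# the color codes ls et al. emit.
ANSI_ESCAPE = re.compile(r'\x1b\[[0-9;]*[A-Za-z]')

def strip_ansi(text):
    return ANSI_ESCAPE.sub('', text)
```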
* Refactor run/sudo:
    * many times I want to run a handful of commands as one or the other, or
    there are call chains (e.g. in contrib and the fabfile) where I have to pass
    `use_sudo` all the way down, and so on.
    * run/sudo are nearly identical except for a handful of lines, so they
    violate DRY.
    * Probably do the usual "behavior controlled via `env`" thing, and make the
    usual context manager too?
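A DRY sketch: both operations delegate to one internal routine and differ only in the command prefix (names are illustrative, and the `env`/context-manager wiring is omitted):

```python
def _execute(command, use_sudo=False, sudo_user=None):
    # Shared core for run/sudo: build the shell invocation once; sudo just
    # adds a wrapper. Returns the command string here instead of actually
    # shipping it over the SSH channel.
    if use_sudo:
        prefix = "sudo "
        if sudo_user:
            prefix += "-u %s " % sudo_user
        command = prefix + command
    return command

def run(command):
    return _execute(command)

def sudo(command, user=None):
    return _execute(command, use_sudo=True, sudo_user=user)
```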
* Consider eventually changing backend from Paramiko to Twisted or PuSSH (e.g.
in 2.0):
    * Twisted: theoretically easier coding of the main network loops, possibly
    more SSH functionality
    * PuSSH: possibly more SSH functionality such as ProxyCommand, etc
* Allow find_fabfile/load_fabfile to deal with packages as well as modules.
* Have all operations capable of performing a "dry run" wherein they do not
actually change anything, but simply return the empty string or other
appropriate empty values.
    * Allows users to see what would be run, without doing anything
    * Can be used in conjunction with --debug to see the real commands to run
    * Would be difficult to fully "test" user code relying on output from
    server, without users doing their own mocking.
    * How high/low to do dry running?
        * Just "would do run(foo), then sudo(bar), then run(biz)" and never
        enter the methods?
        * or enter the methods, which allows for displaying transformations
        (i.e. "real" commands being run, like what debug shows; "real" paths
        after tilde expansion and so on, in put/get; etc)
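A sketch of the "enter the methods" (low) variant: the operation still computes the real, transformed command so it can be displayed, but returns an empty string instead of executing (the bash wrapper shown is illustrative):

```python
def run(command, dry_run=False):
    # Low-level dry run sketch: still enter the operation so the user sees
    # the transformed command (like what --debug shows), but never execute.
    real_command = '/bin/bash -l -c "%s"' % command  # illustrative transform
    if dry_run:
        print("DRY RUN: %s" % real_command)
        return ""
    # Real execution over SSH elided in this sketch.
    raise NotImplementedError
```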
* Allow global override of the `pty` behavior currently implemented as kwargs
for run/sudo. (And then note this in any FAQs mentioning tty problems!)
* Extend globbing to `get`; with `put` we can leverage local Python libs, but
for `get` we will have to use a subroutine or something that can do e.g. `ls -1
foo*` and then nab the list of resulting file paths.
    * Or see if `paramiko.SFTPClient.listdir` is capable of doing globbing,
    though that is unlikely.
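Client-side filtering may be enough, though: `listdir` returns plain names we can match with `fnmatch`, avoiding the remote `ls -1 foo*` call entirely (sketch; the `listdir` callable is passed in so the example stands alone):

```python
import fnmatch
import posixpath

def expand_remote_glob(listdir, pattern):
    # Sketch of glob support for get(): list the parent directory via the
    # provided listdir callable (e.g. SFTPClient.listdir) and filter the
    # names client-side with fnmatch.
    directory, name_pattern = posixpath.split(pattern)
    names = listdir(directory or '.')
    return [posixpath.join(directory, n) for n in names
            if fnmatch.fnmatch(n, name_pattern)]
```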
* Maybe extend `mode` argument from `put` to `get`, though it seems much less
useful in that direction.
* See whether task dependencies make sense. So far, have been simply writing
"meta tasks" containing a specific order of sub-task function calls, but being
able to specify a Cap/Rake like "always call X after Y runs" might be useful
for some cases.
    * Primary argument against this is that it introduces additional
    magic/processing to the execution model: a function call is no longer just
    a function call, but may implicitly call other things as well. This could
    be bad both for sanity of use and for implementation complexity.
* Allow arbitrary shell commands to be specified at runtime, e.g. `fab -H foo
-- "ifconfig -a"`
    * Should simply be an alias to calling `run()` with the supplied text
    * Specifying task names along with that syntax should probably be
    considered an error condition? or run the arbitrary command after the other
    tasks?
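The argv handling could be as simple as splitting on the first bare `--` (sketch):

```python
def split_argv(argv):
    # Sketch of `fab -H foo -- "ifconfig -a"`: everything after the first
    # bare `--` is treated as a raw shell command to hand to run(); the
    # rest gets parsed as usual (options, task names).
    if "--" in argv:
        i = argv.index("--")
        return argv[:i], " ".join(argv[i + 1:])
    return argv, None
```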

<!-- vim:set filetype=mkd : -->
