Commit 3858a817 by rfkelly0

some doc spelling fixes

Many more to deal with; `make spelling` for the list
parent 4c18d16d
FS
fs
filesystem
isfile
isdir
metadata
makedir
movedir
listdir
copyfile
copydir
syspath
unicode
TODO
LAFS
contrib
tahoefs
username
dict
Auth
SSL
url
WebDAV
dexml
RemoteFileBuffer
DAVFS
davfs
pyfilesystem
filetimes
TahoeFS
ZipOpenError
FileSystem
zipfs
wrapfs
readonlyfs
UnsupportedError
LimitSizeFS
limitsizefs
args
kwds
LazyFS
lazyfs
unix
WrapFS
wrapfs
HideDotFilesFS
hidedotfilesfs
dirs
iterable
jpg
stdout
OSFS
osfs
tempfs
TempFS
mkdtemp
fs.wrapfs
=========

-The ``fs.wrapfs`` module adds aditional functionality to existing FS objects.
+The ``fs.wrapfs`` module adds additional functionality to existing FS objects.

.. toctree::
    :maxdepth: 3

@@ -10,4 +10,4 @@ The ``fs.wrapfs`` module adds aditional functionality to existing FS objects.
    hidedotfiles.rst
    lazyfs.rst
    limitsize.rst
    readonlyfs.rst
\ No newline at end of file
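For orientation, the wrappers listed in that toctree are all applied the same way: an existing FS object is passed to the wrapper's constructor. A minimal sketch, assuming ``ReadOnlyFS`` lives at ``fs.wrapfs.readonlyfs`` and takes only the FS to wrap (these details are assumptions, not taken from this commit)::

    from fs.osfs import OSFS
    from fs.wrapfs.readonlyfs import ReadOnlyFS

    plain = OSFS('/tmp')          # an ordinary filesystem object
    safe = ReadOnlyFS(plain)      # wrapped: reads are delegated, modifications are blocked
    print(safe.listdir('/'))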
@@ -387,7 +387,7 @@ class FS(object):
        :type wildcard: string containing a wildcard, or a callable that accepts a path and returns a boolean
        :param full: returns full paths (relative to the root)
        :type full: bool
-        :param absolute: returns absolute paths (paths begining with /)
+        :param absolute: returns absolute paths (paths beginning with /)
        :type absolute: bool
        :param dirs_only: if True, only return directories
        :type dirs_only: bool
...
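The keyword arguments documented above combine independently on the directory-listing methods of the base ``FS`` class; a brief hedged sketch (``listdir`` is assumed to be the method this docstring belongs to, and the paths are illustrative)::

    from fs.osfs import OSFS

    fs = OSFS('/tmp')
    fs.listdir(wildcard='*.txt')   # only entries matching the wildcard
    fs.listdir(full=True)          # full paths relative to the FS root
    fs.listdir(absolute=True)      # paths beginning with '/'
    fs.listdir(dirs_only=True)     # directories only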
@@ -2,8 +2,7 @@
fs.contrib.tahoefs
==================

-Example (it will use publicly available, but slow-as-hell Tahoe-LAFS cloud):
+Example (it will use publicly available, but slow-as-hell Tahoe-LAFS cloud)::

    from fs.contrib.tahoefs import TahoeFS, Connection
    dircap = TahoeFS.createdircap(webapi='http://pubgrid.tahoe-lafs.org')
@@ -15,7 +14,7 @@ Example (it will use publicly available, but slow-as-hell Tahoe-LAFS cloud):
    f.close()
    print "Now visit %s and enjoy :-)" % fs.getpathurl('foo.txt')

-When any problem occurred, you can turn on internal debugging messages:
+When any problem occurred, you can turn on internal debugging messages::

    import logging
    l = logging.getLogger()
@@ -25,26 +24,28 @@ When any problem occurred, you can turn on internal debugging messages:
    ... your Python code using TahoeFS ...

TODO:
-x unicode support
-x try network errors / bad happiness
-x exceptions
-x tests
-x sanitize all path types (., /)
-x support for extra large file uploads (poster module)
-x Possibility to block write until upload done (Tahoe mailing list)
-x Report something sane when Tahoe crashed/unavailable
-x solve failed unit tests (makedir_winner, ...)
-filetimes
-docs & author
-python3 support
-remove creating blank files (depends on FileUploadManager)
+* unicode support
+* try network errors / bad happiness
+* exceptions
+* tests
+* sanitize all path types (., /)
+* support for extra large file uploads (poster module)
+* Possibility to block write until upload done (Tahoe mailing list)
+* Report something sane when Tahoe crashed/unavailable
+* solve failed unit tests (makedir_winner, ...)
+* file times
+* docs & author
+* python3 support
+* remove creating blank files (depends on FileUploadManager)

TODO (Not TahoeFS specific tasks):
-x RemoteFileBuffer on the fly buffering support
-x RemoteFileBuffer unit tests
-x RemoteFileBuffer submit to trunk
-Implement FileUploadManager + faking isfile/exists of just processing file
-pyfilesystem docs is outdated (rename, movedir, ...)
+* RemoteFileBuffer on the fly buffering support
+* RemoteFileBuffer unit tests
+* RemoteFileBuffer submit to trunk
+* Implement FileUploadManager + faking isfile/exists of just processing file
+* pyfilesystem docs is outdated (rename, movedir, ...)

'''
...
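The doc example above is only partially visible between the diff hunks. A hedged reconstruction of the intended flow follows; the ``TahoeFS(dircap, webapi=...)`` constructor call and the file-writing lines in the middle are assumptions, while ``createdircap``, ``getpathurl`` and the ``foo.txt`` name come from the visible lines::

    from fs.contrib.tahoefs import TahoeFS

    webapi = 'http://pubgrid.tahoe-lafs.org'
    dircap = TahoeFS.createdircap(webapi=webapi)   # create a new directory capability
    fs = TahoeFS(dircap, webapi=webapi)            # assumed constructor form

    f = fs.open('foo.txt', 'w')                    # file name taken from the visible example
    f.write('Hello, Tahoe!')
    f.close()
    print("Now visit %s and enjoy :-)" % fs.getpathurl('foo.txt'))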
@@ -278,7 +278,7 @@ class FileLikeBase(object):
        """
        # Errors in subclass constructors can cause this to be called without
        # having called FileLikeBase.__init__(). Since we need the attrs it
-        # initialises in cleanup, ensure we call it here.
+        # initializes in cleanup, ensure we call it here.
        if not hasattr(self,"closed"):
            FileLikeBase.__init__(self)
        if not self.closed:
...
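The comment corrected here documents a defensive-cleanup idiom: ``close()`` can run even when ``__init__`` never completed, so the base initializer is re-run before its attributes are relied on. A generic sketch of the same pattern (not the library's actual class)::

    class ResourceHolder(object):
        """Illustrates the defensive close() idiom described above."""

        def __init__(self):
            self.closed = False

        def close(self):
            # A failed subclass constructor may leave `closed` unset, so make
            # sure the base initializer has run before checking it.
            if not hasattr(self, "closed"):
                ResourceHolder.__init__(self)
            if not self.closed:
                self.closed = True
                # ... release the underlying resource here ...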
@@ -345,23 +345,4 @@ class OSFS(OSFSXAttrMixin, OSFSWatchMixin, FS):
    def getsize(self, path):
        return self._stat(path).st_size

-    #@convert_os_errors
-    #def opendir(self, path):
-    #    """A specialised opendir that returns another OSFS rather than a SubDir
-    #
-    #    This is more optimal than a SubDir because no path delegation is required.
-    #
-    #    """
-    #    if path in ('', '/'):
-    #        return self
-    #    path = normpath(path)
-    #    if not self.exists(path):
-    #        raise ResourceNotFoundError(path)
-    #    sub_path = pathjoin(self.root_path, path)
-    #    return OSFS(sub_path,
-    #                thread_synchronize=self.thread_synchronize,
-    #                encoding=self.encoding,
-    #                create=False,
-    #                dir_mode=self.dir_mode)
@@ -290,7 +290,7 @@ class ConnectionManagerFS(LazyFS):
    operating-system integration may be added.

    Since some remote FS classes can raise RemoteConnectionError during
-    initialisation, this class makes use of lazy initialization. The
+    initialization, this class makes use of lazy initialization. The
    remote FS can be specified as an FS instance, an FS subclass, or a
    (class,args) or (class,args,kwds) tuple. For example::
...
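The ``For example::`` block itself is cut off by the diff context. A hedged sketch of the three documented forms, assuming ``ConnectionManagerFS`` is importable from ``fs.remote`` (where this docstring lives) and using ``OSFS`` as a stand-in so the snippet stays runnable; a real deployment would pass a remote FS class::

    from fs.osfs import OSFS
    from fs.remote import ConnectionManagerFS

    # 1. An already-constructed FS instance.
    cmfs = ConnectionManagerFS(OSFS('/tmp'))

    # 2. A (class, args) tuple: construction, and any RemoteConnectionError
    #    it might raise, is deferred until the wrapped FS is first needed.
    cmfs = ConnectionManagerFS((OSFS, ('/tmp',)))

    # 3. A (class, args, kwds) tuple works the same way.
    cmfs = ConnectionManagerFS((OSFS, ('/tmp',), {'create': False}))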
@@ -156,7 +156,6 @@ class RPCFS(FS):
            return self.proxy.getmeta(meta_name)
        else:
            return self.proxy.getmeta_default(meta_name, default)
-
    def hasmeta(self, meta_name):
        return self.proxy.hasmeta(meta_name)
...
@@ -12,6 +12,7 @@ interface for objects stored in Amazon Simple Storage Service (S3).
import os
import time
import datetime
+import hashlib
import tempfile
from fnmatch import fnmatch
import stat as statinfo
@@ -77,7 +78,7 @@ class S3FS(FS):
    S3FS objects require the name of the S3 bucket in which to store
    files, and can optionally be given a prefix under which the files
-    shoud be stored. The AWS public and private keys may be specified
+    should be stored. The AWS public and private keys may be specified
    as additional arguments; if they are not specified they will be
    read from the two environment variables AWS_ACCESS_KEY_ID and
    AWS_SECRET_ACCESS_KEY.
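Based on that description, constructing an S3FS looks roughly like the following; the keyword names (``prefix``, ``aws_access_key``, ``aws_secret_key``) are assumptions drawn from the docstring rather than a verified signature::

    import os
    from fs.s3fs import S3FS

    # Keys may be passed explicitly, or read from AWS_ACCESS_KEY_ID /
    # AWS_SECRET_ACCESS_KEY in the environment when omitted.
    s3 = S3FS("my-bucket",
              prefix="backups/",
              aws_access_key=os.environ.get("AWS_ACCESS_KEY_ID"),
              aws_secret_key=os.environ.get("AWS_SECRET_ACCESS_KEY"))

    s3.setcontents("hello.txt", "hello from S3FS")
    print(s3.getcontents("hello.txt"))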
@@ -575,10 +576,60 @@ class S3FS(FS):
        self.copy(src,dst,overwrite=overwrite)
        self._s3bukt.delete_key(self._s3path(src))

-    def get_total_size(self,path=""):
-        """Get total size of all files in this FS."""
-        prefix = self._s3path(path)
-        return sum(k.size for k in self._s3bukt.list(prefix=prefix))
+    def walkfiles(self,
+                  path="/",
+                  wildcard=None,
+                  dir_wildcard=None,
+                  search="breadth",
+                  ignore_errors=False ):
+        if search != "breadth" or dir_wildcard is not None:
+            for item in super(S3FS,self).walkfiles(path,wildcard,dir_wildcard,search,ignore_errors):
+                yield item
+        else:
+            prefix = self._s3path(path)
+            prefix_len = len(prefix)
+            for k in self._s3bukt.list(prefix=prefix):
+                name = k.name[prefix_len:]
+                if name != "":
+                    if not isinstance(name,unicode):
+                        name = name.decode("utf8")
+                    if not name.endswith(self._separator):
+                        if wildcard is not None:
+                            if callable(wildcard):
+                                if not wildcard(name):
+                                    continue
+                            else:
+                                if not fnmatch(name,wildcard):
+                                    continue
+                        yield abspath(name)
+
+    def walkfilesinfo(self,
+                      path="/",
+                      wildcard=None,
+                      dir_wildcard=None,
+                      search="breadth",
+                      ignore_errors=False ):
+        if search != "breadth" or dir_wildcard is not None:
+            for item in super(S3FS,self).walkfiles(path,wildcard,dir_wildcard,search,ignore_errors):
+                yield (item,self.getinfo(item))
+        else:
+            prefix = self._s3path(path)
+            prefix_len = len(prefix)
+            for k in self._s3bukt.list(prefix=prefix):
+                name = k.name[prefix_len:]
+                if name != "":
+                    if not isinstance(name,unicode):
+                        name = name.decode("utf8")
+                    if not name.endswith(self._separator):
+                        if wildcard is not None:
+                            if callable(wildcard):
+                                if not wildcard(name):
+                                    continue
+                            else:
+                                if not fnmatch(name,wildcard):
+                                    continue
+                        yield (abspath(name),self._get_key_info(k))

def _eq_utf8(name1,name2):
...
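The override above short-circuits the generic breadth-first walk: when no directory wildcard is given it fetches every key under the prefix with a single bucket listing instead of a ``listdir()`` call per directory, and ``walkfilesinfo()`` reuses the key data from that same listing rather than calling ``getinfo()`` per path. Usage is the same as the base methods; a small hedged sketch, where ``s3`` is the S3FS object from the earlier example and the wildcard is arbitrary::

    for path in s3.walkfiles("/", wildcard="*.log"):
        print(path)

    # walkfilesinfo() yields (path, info) pairs built from the bucket listing.
    for path, info in s3.walkfilesinfo("/"):
        print("%s (%s bytes)" % (path, info.get("size")))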
@@ -315,7 +315,7 @@ class FSTestCases(object):
                found_c = True
            if "a.txt" in files:
                break
-        assert found_c, "depth search order was wrong"
+        assert found_c, "depth search order was wrong: " + str(list(self.fs.walk(search="depth")))

    def test_walk_wildcard(self):
        self.fs.setcontents('a.txt', 'hello')
...
@@ -17,7 +17,7 @@ from fs import s3fs
class TestS3FS(unittest.TestCase,FSTestCases,ThreadingTestCases):

    # Disable the tests by default
-    __test__ = False
+    #__test__ = False

    bucket = "test-s3fs.rfk.id.au"
...
@@ -414,7 +414,9 @@ def find_duplicates(fs,
def print_fs(fs, path='/', max_levels=5, file_out=None, terminal_colors=None, hide_dotfiles=False, dirs_first=False):
-    """Prints a filesystem listing to stdout (including sub dirs). Useful as a debugging aid.
+    """Prints a filesystem listing to stdout (including sub directories).
+
+    This mostly useful as a debugging aid.

    Be careful about printing a OSFS, or any other large filesystem.
    Without max_levels set, this function will traverse the entire directory tree.
@@ -432,7 +434,7 @@ def print_fs(fs, path='/', max_levels=5, file_out=None, terminal_colors=None, hi
    :param file_out: File object to write output to (defaults to sys.stdout)
    :param terminal_colors: If True, terminal color codes will be written, set to False for non-console output.
        The default (None) will select an appropriate setting for the platform.
-    :param hide_dotfiles: if True, files or directories begining with '.' will be removed
+    :param hide_dotfiles: if True, files or directories beginning with '.' will be removed

    """
...
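A short usage sketch of the function documented above, assuming ``print_fs`` is importable from ``fs.utils`` in this version of the library (the path and depth are arbitrary)::

    from fs.osfs import OSFS
    from fs.utils import print_fs

    # Limit the depth so a large on-disk tree is not traversed in full.
    print_fs(OSFS('/tmp'), max_levels=2, hide_dotfiles=True, dirs_first=True)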
@@ -14,7 +14,7 @@ class HideDotFilesFS(WrapFS):
    """FS wrapper class that hides dot-files in directory listings.

    The listdir() function takes an extra keyword argument 'hidden'
-    indicating whether hidden dotfiles shoud be included in the output.
+    indicating whether hidden dot-files shoud be included in the output.
    It is False by default.

    """
...
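A minimal sketch of the behaviour that docstring describes, assuming the wrapper is importable from ``fs.wrapfs.hidedotfilesfs`` (the wrapped path is arbitrary)::

    import os
    from fs.osfs import OSFS
    from fs.wrapfs.hidedotfilesfs import HideDotFilesFS

    fs = HideDotFilesFS(OSFS(os.path.expanduser('~')))
    print(fs.listdir())              # dot-files are filtered out by default
    print(fs.listdir(hidden=True))   # pass hidden=True to include them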
@@ -2,10 +2,10 @@
fs.wrapfs.lazyfs
================

-A class for lazy initialisation of an FS object.
+A class for lazy initialization of an FS object.

This module provides the class LazyFS, an FS wrapper class that can lazily
-initialise its underlying FS object.
+initialize its underlying FS object.

"""
...
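A hedged sketch of what that lazy initialization looks like in practice, assuming ``LazyFS`` accepts an FS class (or a ``(class, args)`` tuple) in the same way the ``ConnectionManagerFS`` docstring above documents, and using ``TempFS`` so the deferred construction is observable::

    from fs.tempfs import TempFS
    from fs.wrapfs.lazyfs import LazyFS

    lazy = LazyFS(TempFS)         # no temporary directory is created yet
    lazy.setcontents('note.txt', 'created on first access')   # TempFS is built here
    print(lazy.listdir())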