Commit e493ada3 by Slater-Victoroff

Removed numpy and scipy files

parent 84149f8f


X.flat returns an indexable 1-D iterator (mostly similar to an array,
but always 1-D); of the array attributes, it has only .copy and .__array__.
.typecode() --> .dtype.char
.iscontiguous() --> .flags['CONTIGUOUS'] or .flags.contiguous
.byteswapped() -> .byteswap()
.itemsize() -> .itemsize
.toscalar() -> .item()
If you used typecode characters:
'c' -> 'S1' or 'c'
'b' -> 'B'
'1' -> 'b'
's' -> 'h'
'w' -> 'H'
'u' -> 'I'
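For scripts that still pass old Numeric typecode characters around, the renames above can be captured in a small lookup table. The helper below is a hypothetical porting aid (not part of NumPy); codes not listed in the table kept their meaning and pass through unchanged.

```python
# Hypothetical helper mapping old Numeric typecode characters to their
# NumPy dtype-character equivalents, per the table above.
OLD_TO_NEW_TYPECODE = {
    'c': 'S1',  # character data ('c' also still works)
    'b': 'B',   # unsigned byte
    '1': 'b',   # signed byte
    's': 'h',   # short
    'w': 'H',   # unsigned short
    'u': 'I',   # unsigned int
}

def port_typecode(old):
    """Return the NumPy dtype character for an old Numeric typecode.

    Codes not in the table above kept the same meaning, so they are
    passed through unchanged.
    """
    return OLD_TO_NEW_TYPECODE.get(old, old)
```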
C-level
Some API calls that used to take PyObject * now take PyArrayObject *
(this should only cause compile-time warnings, not actual problems).
PyArray_Take
These functions now return a buffer that must be freed once it is used,
via PyMemData_FREE(ptr).
a->descr->zero --> PyArray_Zero(a)
a->descr->one --> PyArray_One(a)
Numeric/arrayobject.h --> numpy/oldnumeric.h
# These will actually work and are defines for PyArray_BYTE,
# but you really should change it in your code
PyArray_CHAR --> PyArray_CHAR
(or PyArray_STRING which is more flexible)
PyArray_SBYTE --> PyArray_BYTE
Any uses of character codes will need adjusting; use PyArray_XXXLTR,
where XXX is the name of the type.
If you used function pointers directly (why did you do that?), the
arguments have changed: everything that was an int is now an intp, and
array objects should be passed in at the end.
Old: a->descr->cast[i](fromdata, fromstep, todata, tostep, n)
New: a->descr->cast[i](fromdata, todata, n, PyArrayObject *in, PyArrayObject *out)
Anything but single-stepping is not supported by this function; use the
PyArray_CastXXXX functions instead.
Thank you for your willingness to help make NumPy the best array system
available.
We have a few simple rules:
* Try hard to keep the Git repository in a buildable state, and do not
indiscriminately muck with what others have contributed.
* Simple changes (including bug fixes) and obvious improvements are
always welcome. Changes that fundamentally change behavior need
discussion on numpy-discussions@scipy.org before anything is
done.
* Please add meaningful comments when you check changes in. These
comments form the basis of the change-log.
* Add unit tests to exercise new code, and regression tests
whenever you fix a bug.
.. -*- rest -*-
.. vim:syntax=rest
.. NB! Keep this document a valid reStructuredText document.
Building and installing NumPy
+++++++++++++++++++++++++++++
:Authors: Numpy Developers <numpy-discussion@scipy.org>
:Discussions to: numpy-discussion@scipy.org
.. Contents::
PREREQUISITES
=============
Building NumPy requires the following software installed:
1) Python__ 2.4.x or newer
On Debian and derivatives (e.g. Ubuntu): python python-dev
On Windows: the official installer from Python__ is enough
Make sure that the Python package distutils is installed before
continuing. For example, in Debian GNU/Linux, distutils is included
in the python-dev package.
Python must also be compiled with the zlib module enabled.
2) nose__ (optional) 0.10.3 or later
This is required for testing numpy, but not for using it.
Python__ http://www.python.org
nose__ http://somethingaboutorange.com/mrl/projects/nose/
Fortran ABI mismatch
====================
The two most popular open source Fortran compilers are g77 and gfortran.
Unfortunately, they are not ABI compatible, which concretely means that you
should avoid mixing libraries built with one compiler with the other. In
particular, if your blas/lapack/atlas is built with g77, you *must* use g77
when building numpy and scipy; conversely, if your atlas is built with
gfortran, you *must* build numpy/scipy with gfortran.
Choosing the fortran compiler
-----------------------------
To build with g77::

    python setup.py build --fcompiler=gnu

To build with gfortran::

    python setup.py build --fcompiler=gnu95
How to check the ABI of blas/lapack/atlas
-----------------------------------------
One relatively simple and reliable way to check which compiler was used to
build a library is to run ldd on the library. If libg2c.so is a dependency,
g77 has been used. If libgfortran.so is a dependency, gfortran has been
used. If both are dependencies, both have been used, which is almost always
a very bad idea.
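The ldd heuristic described above can be scripted. The sketch below is only an illustration: it inspects ldd-style output for the two telltale runtime libraries, and the function name and return values are made up for this example.

```python
def fortran_abi(ldd_output):
    """Guess the Fortran ABI of a library from the text output of ldd.

    Per the heuristic above: libg2c.so indicates g77, libgfortran.so
    indicates gfortran, and seeing both means the two ABIs were mixed
    (almost always a very bad idea).
    """
    has_g2c = 'libg2c.so' in ldd_output
    has_gfortran = 'libgfortran.so' in ldd_output
    if has_g2c and has_gfortran:
        return 'mixed'
    if has_g2c:
        return 'g77'
    if has_gfortran:
        return 'gfortran'
    return 'unknown'
```

Feed it the captured output of, e.g., ``ldd /usr/lib/liblapack.so``.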
Building with ATLAS support
===========================
Ubuntu 8.10 (Intrepid)
----------------------
You can install the necessary packages for an optimized ATLAS with this
command::

    sudo apt-get install libatlas-base-dev

If you have a recent CPU with SIMD support (SSE, SSE2, etc.), you should
also install the corresponding package for optimal performance. For example,
for SSE2::

    sudo apt-get install libatlas3gf-sse2

*NOTE*: Intrepid changed its default Fortran compiler to gfortran, so if
you build your own ATLAS on Intrepid you should rebuild everything from
scratch, including LAPACK.
Ubuntu 8.04 and lower
---------------------
You can install the necessary packages for an optimized ATLAS with this
command::

    sudo apt-get install atlas3-base-dev

If you have a recent CPU with SIMD support (SSE, SSE2, etc.), you should
also install the corresponding package for optimal performance. For example,
for SSE2::

    sudo apt-get install atlas3-sse2
Windows 64-bit notes
=====================
Note: only AMD64 is supported (IA64 is not); AMD64 is the version most people
want.
Free compilers (mingw-w64)
--------------------------
http://mingw-w64.sourceforge.net/
To use the free compilers (mingw-w64), you need to build your own toolchain,
as the mingw project only distributes cross-compilers (cross-compilation is
not supported by numpy). Since this toolchain is still being worked on,
serious compiler bugs can be expected. binutils 2.19 + gcc 4.3.3 + the
mingw-w64 runtime gives you a working C compiler (but the C++ compiler is
broken). gcc 4.4 will hopefully be able to run natively.
This is the only tested way to get a numpy with a FULL blas/lapack (scipy
does not work because of C++).
MS compilers
------------
If you are familiar with MS tools, that's obviously the easiest path, and the
compilers are hopefully more mature (although in my experience, they are quite
fragile, and often segfault on invalid C code). The main drawback is that no
Fortran compiler + MS compiler combination has been tested - the mingw-w64
gfortran + MS compiler combination does not work at all (it is unclear
whether it ever will).
For Python 2.5, you need VS 2005 (MS compiler version 14) targeting
AMD64, or the Platform SDK v6.0 or below (which gives command-line
versions of the 64-bit target compilers). The PSDK is free.
For Python 2.6, you need VS 2008. The freely available version does not
contain the 64-bit compilers (you also need the PSDK, v6.1).
It is *crucial* to use the right version: Python 2.5 -> version 14,
Python 2.6 -> version 15. You can check the compiler version with
cl.exe /?. Note also that for Python 2.5, the 64-bit and 32-bit versions
use a different compiler version.
Copyright (c) 2005-2009, NumPy Developers.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of the NumPy Developers nor the names of any
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Use the .add_data_files and .add_data_dir methods in the appropriate
# setup.py files to include non-Python files such as documentation and
# data files in the distribution. Avoid using MANIFEST.in for that.
#
include MANIFEST.in
include COMPATIBILITY
include *.txt
include setupscons.py
include setupsconsegg.py
include setupegg.py
include site.cfg.example
include tools/py3tool.py
# Adding scons build related files not found by distutils
recursive-include numpy/core/code_generators *.py *.txt
recursive-include numpy/core *.in *.h
recursive-include numpy SConstruct SConscript
# Add documentation: we don't use add_data_dir since we do not want to include
# this at installation, only for sdist-generated tarballs
include doc/Makefile doc/postprocess.py
recursive-include doc/release *
recursive-include doc/source *
recursive-include doc/sphinxext *
recursive-include doc/cython *
recursive-include doc/pyrex *
recursive-include doc/swig *
Metadata-Version: 1.0
Name: numpy
Version: 1.6.2
Summary: NumPy: array processing for numbers, strings, records, and objects.
Home-page: http://numpy.scipy.org
Author: NumPy Developers
Author-email: numpy-discussion@scipy.org
License: BSD
Download-URL: http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103
Description: NumPy is a general-purpose array-processing package designed to
efficiently manipulate large multi-dimensional arrays of arbitrary
records without sacrificing too much speed for small multi-dimensional
arrays. NumPy is built on the Numeric code base and adds features
introduced by numarray as well as an extended C-API and the ability to
create arrays of arbitrary type which also makes NumPy suitable for
interfacing with general-purpose data-base applications.
There are also basic facilities for discrete Fourier transforms,
basic linear algebra and random number generation.
Platform: Windows
Platform: Linux
Platform: Solaris
Platform: Mac OS-X
Platform: Unix
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved
Classifier: Programming Language :: C
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Software Development
Classifier: Topic :: Scientific/Engineering
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX
Classifier: Operating System :: Unix
Classifier: Operating System :: MacOS
NumPy is the fundamental package needed for scientific computing with Python.
This package contains:
* a powerful N-dimensional array object
* sophisticated (broadcasting) functions
* tools for integrating C/C++ and Fortran code
* useful linear algebra, Fourier transform, and random number capabilities.
It derives from the old Numeric code base and can be used as a replacement for Numeric. It also adds the features introduced by numarray and can be used to replace numarray.
More information can be found at the website:
http://scipy.org/NumPy
After installation, tests can be run with::

    python -c 'import numpy; numpy.test()'

When installing a new version of numpy for the first time or before upgrading
to a newer version, it is recommended to turn on deprecation warnings when
running the tests::

    python -Wd -c 'import numpy; numpy.test()'
The most current development version is always available from our
git repository:
http://github.com/numpy/numpy
Travis Oliphant for the NumPy core, the NumPy guide, various
bug-fixes and code contributions.
Paul Dubois, who implemented the original Masked Arrays.
Pearu Peterson for f2py, numpy.distutils and help with code
organization.
Robert Kern for mtrand, bug fixes, help with distutils, code
organization, strided tricks and much more.
Eric Jones for planning and code contributions.
Fernando Perez for code snippets, ideas, bugfixes, and testing.
Ed Schofield for matrix.py patches, bugfixes, testing, and docstrings.
Robert Cimrman for array set operations and numpy.distutils help.
John Hunter for code snippets from matplotlib.
Chris Hanley for help with records.py, testing, and bug fixes.
Travis Vaught for administration, community coordination and
marketing.
Joe Cooper, Jeff Strunk for administration.
Eric Firing for bugfixes.
Arnd Baecker for 64-bit testing.
David Cooke for many code improvements including the auto-generated C-API,
and optimizations.
Andrew Straw for help with the web-page, documentation, packaging and
testing.
Alexander Belopolsky (Sasha) for Masked array bug-fixes and tests,
rank-0 array improvements, scalar math help and other code additions.
Francesc Altet for unicode, work on nested record arrays, and bug-fixes.
Tim Hochberg for getting the build working on MSVC, optimization
improvements, and code review.
Charles (Chuck) Harris for the sorting code originally written for
Numarray and for improvements to polyfit, many bug fixes, delving
into the C code, release management, and documentation.
David Huard for histogram improvements including 2-D and d-D code and
other bug-fixes.
Stefan van der Walt for numerous bug-fixes, testing and documentation.
Albert Strasheim for documentation, bug-fixes, regression tests and
Valgrind expertise.
David Cournapeau for build support, doc and bug fixes, and code
contributions including fast_clipping.
Jarrod Millman for release management, community coordination, and code
clean up.
Chris Burns for work on memory mapped arrays and bug-fixes.
Pauli Virtanen for documentation, bug-fixes, lookfor and the
documentation editor.
A.M. Archibald for no-copy-reshape code, strided array tricks,
documentation and bug-fixes.
Pierre Gerard-Marchant for rewriting masked array functionality.
Roberto de Almeida for the buffered array iterator.
Alan McIntyre for updating the NumPy test framework to use nose, improving
the test coverage, and enhancing the test system documentation.
Joe Harrington for administering the 2008 Documentation Sprint.
Mark Wiebe for the new NumPy iterator, the float16 data type, improved
low-level data type operations, and other NumPy core improvements.
NumPy is based on the Numeric (Jim Hugunin, Paul Dubois, Konrad
Hinsen, and David Ascher) and NumArray (Perry Greenfield, J Todd
Miller, Rick White and Paul Barrett) projects. We thank them for
paving the way ahead.
Institutions
------------
Enthought for providing resources and finances for development of NumPy.
UC Berkeley for providing travel money and hosting numerous sprints.
The University of Central Florida for funding the 2008 Documentation Marathon.
The University of Stellenbosch for hosting the buildbot.
from timeit import Timer

class Benchmark(dict):
    """Benchmark a feature in different modules."""
    def __init__(self, modules, title='', runs=3, reps=1000):
        self.module_test = dict((m, '') for m in modules)
        self.runs = runs
        self.reps = reps
        self.title = title

    def __setitem__(self, module, (test_str, setup_str)):
        """Set the test code for modules."""
        if module == 'all':
            modules = self.module_test.keys()
        else:
            modules = [module]
        for m in modules:
            setup_str = 'import %s; import %s as np; ' % (m, m) \
                        + setup_str
            self.module_test[m] = Timer(test_str, setup_str)

    def run(self):
        """Run the benchmark on the different modules."""
        module_column_len = max(len(mod) for mod in self.module_test)
        if self.title:
            print self.title
        print 'Doing %d runs, each with %d reps.' % (self.runs, self.reps)
        print '-'*79
        for mod in sorted(self.module_test):
            modname = mod.ljust(module_column_len)
            try:
                print "%s: %s" % (modname,
                    self.module_test[mod].repeat(self.runs, self.reps))
            except Exception, e:
                print "%s: Failed to benchmark (%s)." % (modname, e)
        print '-'*79
        print
from benchmark import Benchmark
modules = ['numpy','Numeric','numarray']
b = Benchmark(modules,
title='Casting a (10,10) integer array to float.',
runs=3,reps=10000)
N = [10,10]
b['numpy'] = ('b = a.astype(int)',
'a=numpy.zeros(shape=%s,dtype=float)' % N)
b['Numeric'] = ('b = a.astype("l")',
'a=Numeric.zeros(shape=%s,typecode="d")' % N)
b['numarray'] = ("b = a.astype('l')",
"a=numarray.zeros(shape=%s,typecode='d')" % N)
b.run()
from benchmark import Benchmark
modules = ['numpy','Numeric','numarray']
N = [10,10]
b = Benchmark(modules,
title='Creating %s zeros.' % N,
runs=3,reps=10000)
b['numpy'] = ('a=np.zeros(shape,type)', 'shape=%s;type=float' % N)
b['Numeric'] = ('a=np.zeros(shape,type)', 'shape=%s;type=np.Float' % N)
b['numarray'] = ('a=np.zeros(shape,type)', "shape=%s;type=np.Float" % N)
b.run()
import timeit
# This is to show that NumPy is a poorer choice than nested Python lists
# if you are writing nested for loops.
# This is slower than Numeric was but Numeric was slower than Python lists were
# in the first place.
N = 30
code2 = r"""
for k in xrange(%d):
    for l in xrange(%d):
        res = a[k,l].item() + a[l,k].item()
""" % (N,N)
code3 = r"""
for k in xrange(%d):
    for l in xrange(%d):
        res = a[k][l] + a[l][k]
""" % (N,N)
code = r"""
for k in xrange(%d):
    for l in xrange(%d):
        res = a[k,l] + a[l,k]
""" % (N,N)
setup3 = r"""
import random
a = [[None for k in xrange(%d)] for l in xrange(%d)]
for k in xrange(%d):
    for l in xrange(%d):
        a[k][l] = random.random()
""" % (N,N,N,N)
numpy_timer1 = timeit.Timer(code, 'import numpy as np; a = np.random.rand(%d,%d)' % (N,N))
numeric_timer = timeit.Timer(code, 'import MLab as np; a=np.rand(%d,%d)' % (N,N))
numarray_timer = timeit.Timer(code, 'import numarray.mlab as np; a=np.rand(%d,%d)' % (N,N))
numpy_timer2 = timeit.Timer(code2, 'import numpy as np; a = np.random.rand(%d,%d)' % (N,N))
python_timer = timeit.Timer(code3, setup3)
numpy_timer3 = timeit.Timer("res = a + a.transpose()","import numpy as np; a=np.random.rand(%d,%d)" % (N,N))
print "shape = ", (N,N)
print "NumPy 1: ", numpy_timer1.repeat(3,100)
print "NumPy 2: ", numpy_timer2.repeat(3,100)
print "Numeric: ", numeric_timer.repeat(3,100)
print "Numarray: ", numarray_timer.repeat(3,100)
print "Python: ", python_timer.repeat(3,100)
print "Optimized: ", numpy_timer3.repeat(3,100)
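For reference, the pure-Python case above can be reproduced with only the modern standard library (Python 3 renamed xrange to range, and timeit.Timer accepts a globals argument since 3.5). This sketch times just the nested-list variant and assumes nothing beyond stdlib.

```python
import random
import timeit

N = 30
# Nested Python list of random floats, as in setup3 above.
a = [[random.random() for _ in range(N)] for _ in range(N)]

# The same double loop as code3 above, timed via timeit.
stmt = (
    "for k in range(N):\n"
    "    for l in range(N):\n"
    "        res = a[k][l] + a[l][k]\n"
)
timer = timeit.Timer(stmt, globals={"a": a, "N": N})
times = timer.repeat(repeat=3, number=100)  # three runs of 100 reps each
```

Each entry of ``times`` is the total seconds for one run of 100 repetitions, matching the ``repeat(3,100)`` calls above.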
from benchmark import Benchmark
modules = ['numpy','Numeric','numarray']
b = Benchmark(modules,runs=3,reps=100)
N = 10000
b.title = 'Sorting %d elements' % N
b['numarray'] = ('a=np.array(None,shape=%d,typecode="i");a.sort()'%N,'')
b['numpy'] = ('a=np.empty(shape=%d, dtype="i");a.sort()'%N,'')
b['Numeric'] = ('a=np.empty(shape=%d, typecode="i");np.sort(a)'%N,'')
b.run()
N1,N2 = 100,100
b.title = 'Sorting (%d,%d) elements, last axis' % (N1,N2)
b['numarray'] = ('a=np.array(None,shape=(%d,%d),typecode="i");a.sort()'%(N1,N2),'')
b['numpy'] = ('a=np.empty(shape=(%d,%d), dtype="i");a.sort()'%(N1,N2),'')
b['Numeric'] = ('a=np.empty(shape=(%d,%d),typecode="i");np.sort(a)'%(N1,N2),'')
b.run()
N1,N2 = 100,100
b.title = 'Sorting (%d,%d) elements, first axis' % (N1,N2)
b['numarray'] = ('a=np.array(None,shape=(%d,%d), typecode="i");a.sort(0)'%(N1,N2),'')
b['numpy'] = ('a=np.empty(shape=(%d,%d),dtype="i");np.sort(a,0)'%(N1,N2),'')
b['Numeric'] = ('a=np.empty(shape=(%d,%d),typecode="i");np.sort(a,0)'%(N1,N2),'')
b.run()
# Makefile for Sphinx documentation
#
PYVER =
PYTHON = python$(PYVER)
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = LANG=C sphinx-build
PAPER =
FILES=
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help clean html web pickle htmlhelp latex changes linkcheck \
dist dist-build gitwash-update
#------------------------------------------------------------------------------
help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html      to make standalone HTML files"
	@echo "  pickle    to make pickle files (usable by e.g. sphinx-web)"
	@echo "  htmlhelp  to make HTML files and a HTML help project"
	@echo "  latex     to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  changes   to make an overview over all changed/added/deprecated items"
	@echo "  linkcheck to check all external links for integrity"
	@echo "  dist PYVER=...      to make a distribution-ready tree"
	@echo "  upload USER=...     to upload results to docs.scipy.org"
	@echo "  gitwash-update GITWASH=path/to/gitwash  update gitwash developer docs"

clean:
	-rm -rf build/* source/reference/generated

gitwash-update:
	rm -rf source/dev/gitwash
	install -d source/dev/gitwash
	python $(GITWASH)/gitwash_dumper.py source/dev NumPy \
	    --repo-name=numpy \
	    --github-user=numpy
	cat source/dev/gitwash_links.txt >> source/dev/gitwash/git_links.inc
#------------------------------------------------------------------------------
# Automated generation of all documents
#------------------------------------------------------------------------------
# Build the current numpy version, and extract docs from it.
# We have to be careful of some issues:
#
# - Everything must be done using the same Python version
# - We must use eggs (otherwise they might override PYTHONPATH on import).
# - Different versions of easy_install install to different directories (!)
#
INSTALL_DIR = $(CURDIR)/build/inst-dist/
INSTALL_PPH = $(INSTALL_DIR)/lib/python$(PYVER)/site-packages:$(INSTALL_DIR)/local/lib/python$(PYVER)/site-packages:$(INSTALL_DIR)/lib/python$(PYVER)/dist-packages:$(INSTALL_DIR)/local/lib/python$(PYVER)/dist-packages
DIST_VARS=SPHINXBUILD="LANG=C PYTHONPATH=$(INSTALL_PPH) python$(PYVER) `which sphinx-build`" PYTHON="PYTHONPATH=$(INSTALL_PPH) python$(PYVER)" SPHINXOPTS="$(SPHINXOPTS)"
UPLOAD_TARGET = $(USER)@docs.scipy.org:/home/docserver/www-root/doc/numpy/
upload:
	@test -e build/dist || { echo "make dist is required first"; exit 1; }
	@test output-is-fine -nt build/dist || { \
	  echo "Review the output in build/dist, and do 'touch output-is-fine' before uploading."; exit 1; }
	rsync -r -z --delete-after -p \
	    $(if $(shell test -f build/dist/numpy-ref.pdf && echo "y"),, \
	      --exclude '**-ref.pdf' --exclude '**-user.pdf') \
	    $(if $(shell test -f build/dist/numpy-chm.zip && echo "y"),, \
	      --exclude '**-chm.zip') \
	    build/dist/ $(UPLOAD_TARGET)

dist:
	make $(DIST_VARS) real-dist

real-dist: dist-build html
	test -d build/latex || make latex
	make -C build/latex all-pdf
	-test -d build/htmlhelp || make htmlhelp-build
	-rm -rf build/dist
	cp -r build/html build/dist
	perl -pi -e 's#^\s*(<li><a href=".*?">NumPy.*?Manual.*?&raquo;</li>)#<li><a href="/">Numpy and Scipy Documentation</a> &raquo;</li>#;' build/dist/*.html build/dist/*/*.html build/dist/*/*/*.html
	cd build/html && zip -9r ../dist/numpy-html.zip .
	cp build/latex/numpy-*.pdf build/dist
	-zip build/dist/numpy-chm.zip build/htmlhelp/numpy.chm
	cd build/dist && tar czf ../dist.tar.gz *
	chmod ug=rwX,o=rX -R build/dist
	find build/dist -type d -print0 | xargs -0r chmod g+s

dist-build:
	rm -f ../dist/*.egg
	cd .. && $(PYTHON) setupegg.py bdist_egg
	install -d $(subst :, ,$(INSTALL_PPH))
	$(PYTHON) `which easy_install` --prefix=$(INSTALL_DIR) ../dist/*.egg
#------------------------------------------------------------------------------
# Basic Sphinx generation rules for different formats
#------------------------------------------------------------------------------
generate: build/generate-stamp
build/generate-stamp: $(wildcard source/reference/*.rst)
	mkdir -p build
	touch build/generate-stamp

html: generate
	mkdir -p build/html build/doctrees
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) build/html $(FILES)
	$(PYTHON) postprocess.py html build/html/*.html
	@echo
	@echo "Build finished. The HTML pages are in build/html."

pickle: generate
	mkdir -p build/pickle build/doctrees
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) build/pickle $(FILES)
	@echo
	@echo "Build finished; now you can process the pickle files or run"
	@echo "  sphinx-web build/pickle"
	@echo "to start the sphinx-web server."

web: pickle

htmlhelp: generate
	mkdir -p build/htmlhelp build/doctrees
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) build/htmlhelp $(FILES)
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in build/htmlhelp."

htmlhelp-build: htmlhelp build/htmlhelp/numpy.chm
%.chm: %.hhp
	-hhc.exe $^

qthelp: generate
	mkdir -p build/qthelp build/doctrees
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) build/qthelp $(FILES)

latex: generate
	mkdir -p build/latex build/doctrees
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex $(FILES)
	$(PYTHON) postprocess.py tex build/latex/*.tex
	perl -pi -e 's/\t(latex.*|pdflatex) (.*)/\t-$$1 -interaction batchmode $$2/' build/latex/Makefile
	@echo
	@echo "Build finished; the LaTeX files are in build/latex."
	@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
	      "run these through (pdf)latex."

coverage: build
	mkdir -p build/coverage build/doctrees
	$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) build/coverage $(FILES)
	@echo "Coverage finished; see c.txt and python.txt in build/coverage"

changes: generate
	mkdir -p build/changes build/doctrees
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) build/changes $(FILES)
	@echo
	@echo "The overview file is in build/changes."

linkcheck: generate
	mkdir -p build/linkcheck build/doctrees
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) build/linkcheck $(FILES)
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in build/linkcheck/output.txt."
# Simple makefile to quickly access handy build commands for Cython extension
# code generation. Note that the actual code to produce the extension lives in
# the setup.py file, this Makefile is just meant as a command
# convenience/reminder while doing development.
help:
	@echo "Numpy/Cython tasks. Available tasks:"
	@echo "ext  -> build the Cython extension module."
	@echo "html -> create annotated HTML from the .pyx sources"
	@echo "test -> run a simple test demo."
	@echo "all  -> Call ext, html and finally test."

all: ext html test

ext: numpyx.so

test: ext
	python run_test.py

html: numpyx.pyx.html

numpyx.so: numpyx.pyx numpyx.c
	python setup.py build_ext --inplace

numpyx.pyx.html: numpyx.pyx
	cython -a numpyx.pyx
	@echo "Annotated HTML of the C code generated in numpyx.html"

# Phony targets for cleanup and similar uses
.PHONY: clean
clean:
	rm -rf *~ *.so *.c *.o *.html build

# Suffix rules
%.c: %.pyx
	cython $<
==================
NumPy and Cython
==================
This directory contains a small example of how to use NumPy and Cython
together. While much work is planned for the Summer of 2008 as part of the
Google Summer of Code project to improve integration between the two, even
today Cython can be used effectively to write optimized code that accesses
NumPy arrays.
The example provided is just a stub showing how to build an extension and
access the array objects; improvements to this to show more sophisticated tasks
are welcome.
To run it locally, simply type::

    make help

which shows you the currently available targets (these are just handy
shorthands for common commands).
# :Author: Travis Oliphant
# API declaration section. This basically exposes the NumPy C API to
# Pyrex/Cython programs.
cdef extern from "numpy/arrayobject.h":

    cdef enum NPY_TYPES:
        NPY_BOOL
        NPY_BYTE
        NPY_UBYTE
        NPY_SHORT
        NPY_USHORT
        NPY_INT
        NPY_UINT
        NPY_LONG
        NPY_ULONG
        NPY_LONGLONG
        NPY_ULONGLONG
        NPY_FLOAT
        NPY_DOUBLE
        NPY_LONGDOUBLE
        NPY_CFLOAT
        NPY_CDOUBLE
        NPY_CLONGDOUBLE
        NPY_OBJECT
        NPY_STRING
        NPY_UNICODE
        NPY_VOID
        NPY_NTYPES
        NPY_NOTYPE

    cdef enum requirements:
        NPY_CONTIGUOUS
        NPY_FORTRAN
        NPY_OWNDATA
        NPY_FORCECAST
        NPY_ENSURECOPY
        NPY_ENSUREARRAY
        NPY_ELEMENTSTRIDES
        NPY_ALIGNED
        NPY_NOTSWAPPED
        NPY_WRITEABLE
        NPY_UPDATEIFCOPY
        NPY_ARR_HAS_DESCR
        NPY_BEHAVED
        NPY_BEHAVED_NS
        NPY_CARRAY
        NPY_CARRAY_RO
        NPY_FARRAY
        NPY_FARRAY_RO
        NPY_DEFAULT
        NPY_IN_ARRAY
        NPY_OUT_ARRAY
        NPY_INOUT_ARRAY
        NPY_IN_FARRAY
        NPY_OUT_FARRAY
        NPY_INOUT_FARRAY
        NPY_UPDATE_ALL

    cdef enum defines:
        NPY_MAXDIMS

    ctypedef struct npy_cdouble:
        double real
        double imag

    ctypedef struct npy_cfloat:
        float real   # npy_cfloat is a pair of C floats, not doubles
        float imag

    ctypedef int npy_intp

    ctypedef extern class numpy.dtype [object PyArray_Descr]:
        cdef int type_num, elsize, alignment
        cdef char type, kind, byteorder
        cdef int flags
        cdef object fields, typeobj

    ctypedef extern class numpy.ndarray [object PyArrayObject]:
        cdef char *data
        cdef int nd
        cdef npy_intp *dimensions
        cdef npy_intp *strides
        cdef object base
        cdef dtype descr
        cdef int flags

    ctypedef extern class numpy.flatiter [object PyArrayIterObject]:
        cdef int nd_m1
        cdef npy_intp index, size
        cdef ndarray ao
        cdef char *dataptr

    ctypedef extern class numpy.broadcast [object PyArrayMultiIterObject]:
        cdef int numiter
        cdef npy_intp size, index
        cdef int nd
        cdef npy_intp *dimensions
        cdef void **iters

    object PyArray_ZEROS(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran)
    object PyArray_EMPTY(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran)
    dtype PyArray_DescrFromTypeNum(NPY_TYPES type_num)
    object PyArray_SimpleNew(int ndims, npy_intp* dims, NPY_TYPES type_num)
    int PyArray_Check(object obj)
    object PyArray_ContiguousFromAny(object obj, NPY_TYPES type,
                                     int mindim, int maxdim)
    object PyArray_ContiguousFromObject(object obj, NPY_TYPES type,
                                        int mindim, int maxdim)
    npy_intp PyArray_SIZE(ndarray arr)
    npy_intp PyArray_NBYTES(ndarray arr)
    void *PyArray_DATA(ndarray arr)
    object PyArray_FromAny(object obj, dtype newtype, int mindim, int maxdim,
                           int requirements, object context)
    object PyArray_FROMANY(object obj, NPY_TYPES type_num, int min,
                           int max, int requirements)
    object PyArray_NewFromDescr(object subtype, dtype newtype, int nd,
                                npy_intp* dims, npy_intp* strides, void* data,
                                int flags, object parent)
    object PyArray_FROM_OTF(object obj, NPY_TYPES type, int flags)
    object PyArray_EnsureArray(object)
    object PyArray_MultiIterNew(int n, ...)
    char *PyArray_MultiIter_DATA(broadcast multi, int i)
    void PyArray_MultiIter_NEXTi(broadcast multi, int i)
    void PyArray_MultiIter_NEXT(broadcast multi)
    object PyArray_IterNew(object arr)
    void PyArray_ITER_NEXT(flatiter it)
    void import_array()
# :Author: Robert Kern
# :Copyright: 2004, Enthought, Inc.
# :License: BSD Style
cdef extern from "Python.h":
    # Not part of the Python API, but we might as well define it here.
    # Note that the exact type doesn't actually matter for Pyrex.
    ctypedef int size_t

    # Some type declarations we need
    ctypedef int Py_intptr_t

    # String API
    char* PyString_AsString(object string)
    char* PyString_AS_STRING(object string)
    object PyString_FromString(char* c_string)
    object PyString_FromStringAndSize(char* c_string, int length)
    object PyString_InternFromString(char *v)

    # Float API
    object PyFloat_FromDouble(double v)
    double PyFloat_AsDouble(object ob)
    long PyInt_AsLong(object ob)

    # Memory API
    void* PyMem_Malloc(size_t n)
    void* PyMem_Realloc(void* buf, size_t n)
    void PyMem_Free(void* buf)

    void Py_DECREF(object obj)
    void Py_XDECREF(object obj)
    void Py_INCREF(object obj)
    void Py_XINCREF(object obj)

    # CObject API
    ctypedef void (*destructor1)(void* cobj)
    ctypedef void (*destructor2)(void* cobj, void* desc)
    int PyCObject_Check(object p)
    object PyCObject_FromVoidPtr(void* cobj, destructor1 destr)
    object PyCObject_FromVoidPtrAndDesc(void* cobj, void* desc,
                                        destructor2 destr)
    void* PyCObject_AsVoidPtr(object self)
    void* PyCObject_GetDesc(object self)
    int PyCObject_SetVoidPtr(object self, void* cobj)

    # TypeCheck API
    int PyFloat_Check(object obj)
    int PyInt_Check(object obj)

    # Error API
    int PyErr_Occurred()
    void PyErr_Clear()
    int PyErr_CheckSignals()

cdef extern from "string.h":
    void *memcpy(void *s1, void *s2, int n)

cdef extern from "math.h":
    double fabs(double x)
# -*- Mode: Python -*-  Not really, but close enough
"""Cython access to Numpy arrays - simple example.
"""
#############################################################################
# Load C APIs declared in .pxd files via cimport
#
# A 'cimport' is similar to a Python 'import' statement, but it provides access
# to the C part of a library instead of its Python-visible API. See the
# Pyrex/Cython documentation for details.
cimport c_python as py
cimport c_numpy as cnp
# NOTE: numpy MUST be initialized before any other code is executed.
cnp.import_array()

#############################################################################
# Load Python modules via normal import statements
import numpy as np

#############################################################################
# Regular code section begins

# A 'def' function is visible in the Python-imported module
def print_array_info(cnp.ndarray arr):
    """Simple information printer about an array.
    Code meant to illustrate Cython/NumPy integration only."""
    cdef int i
    print '-='*10
    # Note: the double cast here (void * first, then py.Py_intptr_t) is needed
    # in Cython but not in Pyrex, since the casting behavior of Cython is
    # slightly different (and generally safer) than that of Pyrex. In this
    # case, we just want the memory address of the actual Array object, so we
    # cast it to void * before doing the py.Py_intptr_t cast:
    print 'Printing array info for ndarray at 0x%0lx' % \
        (<py.Py_intptr_t><void *>arr,)
    print 'number of dimensions:', arr.nd
    print 'address of strides: 0x%0lx' % (<py.Py_intptr_t>arr.strides,)
    print 'strides:'
    for i from 0 <= i < arr.nd:
        # print each stride
        print '  stride %d:' % i, <py.Py_intptr_t>arr.strides[i]
    print 'memory dump:'
    print_elements(arr.data, arr.strides, arr.dimensions,
                   arr.nd, sizeof(double), arr.dtype)
    print '-='*10
    print

# A 'cdef' function is NOT visible to the python side, but it is accessible to
# the rest of this Cython module
cdef print_elements(char *data,
                    py.Py_intptr_t* strides,
                    py.Py_intptr_t* dimensions,
                    int nd,
                    int elsize,
                    object dtype):
    cdef py.Py_intptr_t i, j
    cdef void* elptr
    if dtype not in [np.dtype(np.object_),
                     np.dtype(np.float64)]:
        print '  print_elements() not (yet) implemented for dtype %s' % dtype.name
return
if nd ==0:
if dtype==np.dtype(np.object_):
elptr = (<void**>data)[0] #[0] dereferences pointer in Pyrex
print ' ',<object>elptr
elif dtype==np.dtype(np.float64):
print ' ',(<double*>data)[0]
elif nd == 1:
for i from 0<=i<dimensions[0]:
if dtype==np.dtype(np.object_):
elptr = (<void**>data)[0]
print ' ',<object>elptr
elif dtype==np.dtype(np.float64):
print ' ',(<double*>data)[0]
data = data + strides[0]
else:
for i from 0<=i<dimensions[0]:
print_elements(data, strides+1, dimensions+1, nd-1, elsize, dtype)
data = data + strides[0]
def test_methods(cnp.ndarray arr):
"""Test a few attribute accesses for an array.
This illustrates how the pyrex-visible object is in practice a strange
hybrid of the C PyArrayObject struct and the python object. Some
properties (like .nd) are visible here but not in python, while others
like flags behave very differently: in python flags appears as a separate
object, while here we see the raw int holding the bit pattern.
This makes sense when we think of how pyrex resolves arr.foo: if foo is
listed as a field in the ndarray struct description, it will be directly
accessed as a C variable without going through Python at all. This is why
for arr.flags, we see the actual int which holds all the flags as bit
fields. However, for any other attribute not listed in the struct, it
simply forwards the attribute lookup to python at runtime, just like python
would (which means that AttributeError can be raised for non-existent
attributes, for example)."""
print 'arr.any() :',arr.any()
print 'arr.nd :',arr.nd
print 'arr.flags :',arr.flags
def test():
"""this function is pure Python"""
arr1 = np.array(-1e-30,dtype=np.float64)
arr2 = np.array([1.0,2.0,3.0],dtype=np.float64)
arr3 = np.arange(9,dtype=np.float64)
arr3.shape = 3,3
four = 4
arr4 = np.array(['one','two',3,four],dtype=np.object_)
arr5 = np.array([1,2,3]) # int types not (yet) supported by print_elements
for arr in [arr1,arr2,arr3,arr4,arr5]:
print_array_info(arr)
#!/usr/bin/env python
from numpyx import test
test()
#!/usr/bin/env python
"""Install file for example on how to use Cython with Numpy.
Note: Cython is the successor project to Pyrex. For more information, see
http://cython.org.
"""
from distutils.core import setup
from distutils.extension import Extension
import numpy
# We detect whether Cython is available, so that below, we can eventually ship
# pre-generated C for users to compile the extension without having Cython
# installed on their systems.
try:
from Cython.Distutils import build_ext
has_cython = True
except ImportError:
has_cython = False
# Define a cython-based extension module, using the generated sources if cython
# is not available.
if has_cython:
pyx_sources = ['numpyx.pyx']
cmdclass = {'build_ext': build_ext}
else:
# In production work, you can ship the auto-generated C source yourself to
# your users. In this case, we do NOT ship the .c file as part of numpy,
# so you'll need to actually have cython installed at least the first
# time. Since this is really just an example to show you how to use
# *Cython*, it makes more sense NOT to ship the C sources so you can edit
# the pyx at will with less chances for source update conflicts when you
# update numpy.
pyx_sources = ['numpyx.c']
cmdclass = {}
# Declare the extension object
pyx_ext = Extension('numpyx',
pyx_sources,
include_dirs = [numpy.get_include()])
# Call the routine which does the real work
setup(name = 'numpyx',
description = 'Small example on using Cython to write a Numpy extension',
ext_modules = [pyx_ext],
cmdclass = cmdclass,
)
#!/usr/bin/env python
"""
%prog MODE FILES...
Post-processes HTML and Latex files output by Sphinx.
MODE is either 'html' or 'tex'.
"""
import re, optparse
def main():
p = optparse.OptionParser(__doc__)
options, args = p.parse_args()
if len(args) < 1:
p.error('no mode given')
mode = args.pop(0)
if mode not in ('html', 'tex'):
p.error('unknown mode %s' % mode)
for fn in args:
f = open(fn, 'r')
try:
if mode == 'html':
lines = process_html(fn, f.readlines())
elif mode == 'tex':
lines = process_tex(f.readlines())
finally:
f.close()
f = open(fn, 'w')
f.write("".join(lines))
f.close()
def process_html(fn, lines):
return lines
def process_tex(lines):
"""
Remove unnecessary section titles from the LaTeX file.
"""
new_lines = []
for line in lines:
if (line.startswith(r'\section{numpy.')
or line.startswith(r'\subsection{numpy.')
or line.startswith(r'\subsubsection{numpy.')
or line.startswith(r'\paragraph{numpy.')
or line.startswith(r'\subparagraph{numpy.')
):
pass # skip!
else:
new_lines.append(line)
return new_lines
if __name__ == "__main__":
main()
all:
python setup.py build_ext --inplace
test: all
python run_test.py
.PHONY: clean
clean:
rm -rf *~ *.so *.c *.o build
WARNING: this code is deprecated and slated for removal soon. See the
doc/cython directory for the replacement, which uses Cython (the actively
maintained version of Pyrex).
# :Author: Travis Oliphant
cdef extern from "numpy/arrayobject.h":
cdef enum NPY_TYPES:
NPY_BOOL
NPY_BYTE
NPY_UBYTE
NPY_SHORT
NPY_USHORT
NPY_INT
NPY_UINT
NPY_LONG
NPY_ULONG
NPY_LONGLONG
NPY_ULONGLONG
NPY_FLOAT
NPY_DOUBLE
NPY_LONGDOUBLE
NPY_CFLOAT
NPY_CDOUBLE
NPY_CLONGDOUBLE
NPY_OBJECT
NPY_STRING
NPY_UNICODE
NPY_VOID
NPY_NTYPES
NPY_NOTYPE
cdef enum requirements:
NPY_CONTIGUOUS
NPY_FORTRAN
NPY_OWNDATA
NPY_FORCECAST
NPY_ENSURECOPY
NPY_ENSUREARRAY
NPY_ELEMENTSTRIDES
NPY_ALIGNED
NPY_NOTSWAPPED
NPY_WRITEABLE
NPY_UPDATEIFCOPY
NPY_ARR_HAS_DESCR
NPY_BEHAVED
NPY_BEHAVED_NS
NPY_CARRAY
NPY_CARRAY_RO
NPY_FARRAY
NPY_FARRAY_RO
NPY_DEFAULT
NPY_IN_ARRAY
NPY_OUT_ARRAY
NPY_INOUT_ARRAY
NPY_IN_FARRAY
NPY_OUT_FARRAY
NPY_INOUT_FARRAY
NPY_UPDATE_ALL
cdef enum defines:
# Note: as of Pyrex 0.9.5, enums are type-checked more strictly, so this
# can't be used as an integer.
NPY_MAXDIMS
ctypedef struct npy_cdouble:
double real
double imag
ctypedef struct npy_cfloat:
double real
double imag
ctypedef int npy_intp
ctypedef extern class numpy.dtype [object PyArray_Descr]:
cdef int type_num, elsize, alignment
cdef char type, kind, byteorder
cdef int flags
cdef object fields, typeobj
ctypedef extern class numpy.ndarray [object PyArrayObject]:
cdef char *data
cdef int nd
cdef npy_intp *dimensions
cdef npy_intp *strides
cdef object base
cdef dtype descr
cdef int flags
ctypedef extern class numpy.flatiter [object PyArrayIterObject]:
cdef int nd_m1
cdef npy_intp index, size
cdef ndarray ao
cdef char *dataptr
ctypedef extern class numpy.broadcast [object PyArrayMultiIterObject]:
cdef int numiter
cdef npy_intp size, index
cdef int nd
# These next two should be arrays of [NPY_MAXITER], but that is
# difficult to cleanly specify in Pyrex. Fortunately, it doesn't matter.
cdef npy_intp *dimensions
cdef void **iters
object PyArray_ZEROS(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran)
object PyArray_EMPTY(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran)
dtype PyArray_DescrFromTypeNum(NPY_TYPES type_num)
object PyArray_SimpleNew(int ndims, npy_intp* dims, NPY_TYPES type_num)
int PyArray_Check(object obj)
object PyArray_ContiguousFromAny(object obj, NPY_TYPES type,
int mindim, int maxdim)
npy_intp PyArray_SIZE(ndarray arr)
npy_intp PyArray_NBYTES(ndarray arr)
void *PyArray_DATA(ndarray arr)
object PyArray_FromAny(object obj, dtype newtype, int mindim, int maxdim,
int requirements, object context)
object PyArray_FROMANY(object obj, NPY_TYPES type_num, int min,
int max, int requirements)
object PyArray_NewFromDescr(object subtype, dtype newtype, int nd,
npy_intp* dims, npy_intp* strides, void* data,
int flags, object parent)
void PyArray_ITER_NEXT(flatiter it)
void import_array()
# -*- Mode: Python -*- Not really, but close enough
# Expose as much of the Python C API as we need here
cdef extern from "stdlib.h":
ctypedef int size_t
cdef extern from "Python.h":
ctypedef int Py_intptr_t
void* PyMem_Malloc(size_t)
void* PyMem_Realloc(void *p, size_t n)
void PyMem_Free(void *p)
char* PyString_AsString(object string)
object PyString_FromString(char *v)
object PyString_InternFromString(char *v)
int PyErr_CheckSignals()
object PyFloat_FromDouble(double v)
void Py_XINCREF(object o)
void Py_XDECREF(object o)
void Py_CLEAR(object o) # use instead of decref
- cimport with a .pxd file vs 'include foo.pxi'?
- the need to repeat: pyrex does NOT parse C headers.
# -*- Mode: Python -*- Not really, but close enough
"""WARNING: this code is deprecated and slated for removal soon. See the
doc/cython directory for the replacement, which uses Cython (the actively
maintained version of Pyrex).
"""
cimport c_python
cimport c_numpy
import numpy
# Numpy must be initialized
c_numpy.import_array()
def print_array_info(c_numpy.ndarray arr):
cdef int i
print '-='*10
print 'printing array info for ndarray at 0x%0lx'%(<c_python.Py_intptr_t>arr,)
print 'number of dimensions:',arr.nd
print 'address of strides: 0x%0lx'%(<c_python.Py_intptr_t>arr.strides,)
print 'strides:'
for i from 0<=i<arr.nd:
# print each stride
print ' stride %d:'%i,<c_python.Py_intptr_t>arr.strides[i]
print 'memory dump:'
print_elements( arr.data, arr.strides, arr.dimensions,
arr.nd, sizeof(double), arr.dtype )
print '-='*10
print
cdef print_elements(char *data,
c_python.Py_intptr_t* strides,
c_python.Py_intptr_t* dimensions,
int nd,
int elsize,
object dtype):
cdef c_python.Py_intptr_t i,j
cdef void* elptr
if dtype not in [numpy.dtype(numpy.object_),
numpy.dtype(numpy.float64)]:
print ' print_elements() not (yet) implemented for dtype %s'%dtype.name
return
if nd ==0:
if dtype==numpy.dtype(numpy.object_):
elptr = (<void**>data)[0] #[0] dereferences pointer in Pyrex
print ' ',<object>elptr
elif dtype==numpy.dtype(numpy.float64):
print ' ',(<double*>data)[0]
elif nd == 1:
for i from 0<=i<dimensions[0]:
if dtype==numpy.dtype(numpy.object_):
elptr = (<void**>data)[0]
print ' ',<object>elptr
elif dtype==numpy.dtype(numpy.float64):
print ' ',(<double*>data)[0]
data = data + strides[0]
else:
for i from 0<=i<dimensions[0]:
print_elements(data, strides+1, dimensions+1, nd-1, elsize, dtype)
data = data + strides[0]
def test_methods(c_numpy.ndarray arr):
"""Test a few attribute accesses for an array.
This illustrates how the pyrex-visible object is in practice a strange
hybrid of the C PyArrayObject struct and the python object. Some
properties (like .nd) are visible here but not in python, while others
like flags behave very differently: in python flags appears as a separate
object, while here we see the raw int holding the bit pattern.
This makes sense when we think of how pyrex resolves arr.foo: if foo is
listed as a field in the c_numpy.ndarray struct description, it will be
directly accessed as a C variable without going through Python at all.
This is why for arr.flags, we see the actual int which holds all the flags
as bit fields. However, for any other attribute not listed in the struct,
it simply forwards the attribute lookup to python at runtime, just like
python would (which means that AttributeError can be raised for
non-existent attributes, for example)."""
print 'arr.any() :',arr.any()
print 'arr.nd :',arr.nd
print 'arr.flags :',arr.flags
def test():
"""this function is pure Python"""
arr1 = numpy.array(-1e-30,dtype=numpy.float64)
arr2 = numpy.array([1.0,2.0,3.0],dtype=numpy.float64)
arr3 = numpy.arange(9,dtype=numpy.float64)
arr3.shape = 3,3
four = 4
arr4 = numpy.array(['one','two',3,four],dtype=numpy.object_)
arr5 = numpy.array([1,2,3]) # int types not (yet) supported by print_elements
for arr in [arr1,arr2,arr3,arr4,arr5]:
print_array_info(arr)
#!/usr/bin/env python
from numpyx import test
test()
#!/usr/bin/env python
"""
WARNING: this code is deprecated and slated for removal soon. See the
doc/cython directory for the replacement, which uses Cython (the actively
maintained version of Pyrex).
Install file for example on how to use Pyrex with Numpy.
For more details, see:
http://www.scipy.org/Cookbook/Pyrex_and_NumPy
http://www.scipy.org/Cookbook/ArrayStruct_and_Pyrex
"""
from distutils.core import setup
from distutils.extension import Extension
# Make this usable by people who don't have pyrex installed (I've committed
# the generated C sources to SVN).
try:
from Pyrex.Distutils import build_ext
has_pyrex = True
except ImportError:
has_pyrex = False
import numpy
# Define a pyrex-based extension module, using the generated sources if pyrex
# is not available.
if has_pyrex:
pyx_sources = ['numpyx.pyx']
cmdclass = {'build_ext': build_ext}
else:
pyx_sources = ['numpyx.c']
cmdclass = {}
pyx_ext = Extension('numpyx',
pyx_sources,
include_dirs = [numpy.get_include()])
# Call the routine which does the real work
setup(name = 'numpyx',
description = 'Small example on using Pyrex to write a Numpy extension',
url = 'http://www.scipy.org/Cookbook/Pyrex_and_NumPy',
ext_modules = [pyx_ext],
cmdclass = cmdclass,
)
=========================
NumPy 1.4.0 Release Notes
=========================
This minor release includes numerous bug fixes, as well as a few new features.
It is backward compatible with the 1.3.0 release.
Highlights
==========
* New datetime dtype support to deal with dates in arrays
* Faster import time
* Extended array wrapping mechanism for ufuncs
* New Neighborhood iterator (C-level only)
* C99-like complex functions in npymath
New features
============
Extended array wrapping mechanism for ufuncs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An __array_prepare__ method has been added to ndarray to provide subclasses
greater flexibility to interact with ufuncs and ufunc-like functions. ndarray
already provided __array_wrap__, which allowed subclasses to set the array type
for the result and populate metadata on the way out of the ufunc (as seen in
the implementation of MaskedArray). For some applications it is necessary to
provide checks and populate metadata *on the way in*. __array_prepare__ is
therefore called just after the ufunc has initialized the output array but
before computing the results and populating it. This way, checks can be made
and errors raised before operations which may modify data in place.
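For illustration, a minimal sketch of the hook (the subclass name is hypothetical, and note that later NumPy releases deprecated and removed ``__array_prepare__`` in favor of ``__array_ufunc__``, so on a modern installation the method is defined but never invoked; the arithmetic works either way):

```python
import numpy as np

class MetaArray(np.ndarray):
    """Hypothetical subclass showing the NumPy 1.4-era protocol."""
    def __array_prepare__(self, out_arr, context=None):
        # Called after the ufunc has allocated the output array but before
        # it computes the results, so checks can run before any in-place
        # modification of data happens.
        return out_arr

a = np.array([1.0, 2.0, 3.0]).view(MetaArray)
b = np.add(a, 1.0)   # the hook (where supported) sees the empty output first
print(np.asarray(b))
```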
Automatic detection of forward incompatibilities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Previously, if an extension was built against a version N of NumPy and used on
a system with NumPy M < N, the import_array call was successful, which could
cause crashes because version M does not have a function that exists in N.
Starting from NumPy 1.4.0, this will cause a failure in import_array, so the
error will be caught early on.
New iterators
~~~~~~~~~~~~~
A new neighborhood iterator has been added to the C API. It can be used to
iterate over the items in a neighborhood of an array, and can handle boundary
conditions automatically. Zero and one padding are available, as well as
arbitrary constant value, mirror and circular padding.
New polynomial support
~~~~~~~~~~~~~~~~~~~~~~
New modules chebyshev and polynomial have been added. The new polynomial module
is not compatible with the current polynomial support in numpy, but is much
like the new chebyshev module. The most noticeable differences for most users
will be that coefficients are specified from low to high power, that the low
level functions do *not* work with the Chebyshev and Polynomial classes as
arguments, and that the Chebyshev and Polynomial classes include a domain.
Mapping between domains is a linear substitution and the two classes can be
converted one to the other, allowing, for instance, a Chebyshev series in
one domain to be expanded as a polynomial in another domain. The new classes
should generally be used instead of the low level functions; the latter are
provided for those who wish to build their own classes.
The new modules are not automatically imported into the numpy namespace,
they must be explicitly brought in with an "import numpy.polynomial"
statement.
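A short sketch of the points above (low-to-high coefficient order and class conversion; the variable names are illustrative only):

```python
import numpy as np
from numpy.polynomial import Polynomial, Chebyshev

# Coefficients run from low to high power: 1 + 2*x + 3*x**2
p = Polynomial([1, 2, 3])
print(p(2.0))          # 17.0

# A Chebyshev series can be converted to an ordinary polynomial;
# a linear domain map is applied if the domains differ.
c = Chebyshev([0, 0, 1])        # T_2(x) = 2*x**2 - 1
q = c.convert(kind=Polynomial)
print(q.coef)                   # [-1.  0.  2.]
```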
New C API
~~~~~~~~~
The following C functions have been added to the C API:
#. PyArray_GetNDArrayCFeatureVersion: return the *API* version of the
loaded numpy.
#. PyArray_Correlate2 - like PyArray_Correlate, but implements the usual
definition of correlation. Inputs are not swapped, and conjugate is
taken for complex arrays.
#. PyArray_NeighborhoodIterNew - a new iterator to iterate over a
neighborhood of a point, with automatic boundaries handling. It is
documented in the iterators section of the C-API reference, and you can
find some examples in the multiarray_test.c.src file in numpy.core.
New ufuncs
~~~~~~~~~~
The following ufuncs have been added to the C API:
#. copysign - return the value of the first argument with the sign copied
from the second argument.
#. nextafter - return the next representable floating point value of the
first argument toward the second argument.
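Both ufuncs behave like their C99 namesakes; a quick check:

```python
import numpy as np

# copysign: magnitude of the first argument, sign of the second
s = np.copysign([1.0, 2.0, 3.0], [-1.0, 1.0, -1.0])
print(s)                          # [-1.  2. -3.]

# nextafter: the next representable float from x toward y
print(np.nextafter(1.0, 2.0) > 1.0)   # True
print(np.nextafter(1.0, 0.0) < 1.0)   # True
```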
New defines
~~~~~~~~~~~
The alpha processor is now defined and available in numpy/npy_cpu.h. The
failed detection of the PARISC processor has been fixed. The defines are:
#. NPY_CPU_HPPA: PARISC
#. NPY_CPU_ALPHA: Alpha
Testing
~~~~~~~
#. deprecated decorator: this decorator may be used to avoid cluttering
testing output while testing DeprecationWarning is effectively raised by
the decorated test.
#. assert_array_almost_equal_nulp: new function to compare two arrays of
floating point values. With this function, two values are considered
close if there are not many representable floating point values in
between, thus being more robust than assert_array_almost_equal when the
values fluctuate a lot.
#. assert_array_max_ulp: raise an assertion if there are more than N
representable numbers between two floating point values.
#. assert_warns: raise an AssertionError if a callable does not generate a
warning of the appropriate class, without altering the warning state.
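Two of these helpers in use (the ``legacy`` function is a made-up stand-in for a deprecated callable):

```python
import warnings
import numpy as np
from numpy.testing import assert_warns, assert_array_max_ulp

def legacy():
    # Hypothetical deprecated function used only for this demonstration.
    warnings.warn("old API", DeprecationWarning)
    return 42

# Passes only if the callable emits a DeprecationWarning; the callable's
# return value is passed through.
result = assert_warns(DeprecationWarning, legacy)
print(result)   # 42

# 1.0 and the next representable float are exactly 1 ulp apart.
assert_array_max_ulp(np.float64(1.0), np.nextafter(1.0, 2.0), maxulp=1)
```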
Reusing npymath
~~~~~~~~~~~~~~~
In 1.3.0, we started putting portable C math routines in npymath library, so
that people can use those to write portable extensions. Unfortunately, it was
not possible to easily link against this library: in 1.4.0, support has been
added to numpy.distutils so that 3rd party extensions can reuse this library.
See coremath
documentation for more information.
Improved set operations
~~~~~~~~~~~~~~~~~~~~~~~
In previous versions of NumPy some set functions (intersect1d,
setxor1d, setdiff1d and setmember1d) could return incorrect results if
the input arrays contained duplicate items. These now work correctly
for input arrays with duplicates. setmember1d has been renamed to
in1d, as with the change to accept arrays with duplicates it is
no longer a set operation, and is conceptually similar to an
elementwise version of the Python operator 'in'. All of these
functions now accept the boolean keyword assume_unique. This is False
by default, but can be set True if the input arrays are known not
to contain duplicates, which can increase the functions' execution
speed.
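A short check of two of these routines on inputs with duplicates (this sketch sticks to ``intersect1d`` and ``setdiff1d``, which are still spelled the same in current NumPy):

```python
import numpy as np

# Since 1.4.0 the set routines handle duplicate entries correctly.
a = np.array([1, 2, 2, 3, 4])
b = np.array([2, 4, 4, 5])
inter = np.intersect1d(a, b)
diff = np.setdiff1d(a, b)
print(inter)   # [2 4]
print(diff)    # [1 3]

# With inputs known to be duplicate-free, assume_unique=True skips the
# internal de-duplication step for speed.
print(np.intersect1d([1, 2, 3], [2, 3, 4], assume_unique=True))   # [2 3]
```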
Improvements
============
#. numpy import is noticeably faster (by 20 to 30%, depending on the
platform and computer)
#. The sort functions now sort nans to the end.
* Real sort order is [R, nan]
* Complex sort order is [R + Rj, R + nanj, nan + Rj, nan + nanj]
Complex numbers with the same nan placements are sorted according to
the non-nan part if it exists.
#. The type comparison functions have been made consistent with the new
sort order of nans. Searchsorted now works with sorted arrays
containing nan values.
#. Complex division has been made more resistant to overflow.
#. Complex floor division has been made more resistant to overflow.
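The nan sort order can be seen directly (the variable names are illustrative):

```python
import numpy as np

# Real sort: nans go to the end.
x = np.sort(np.array([3.0, np.nan, 1.0]))
print(x[:2])               # [1. 3.]
print(np.isnan(x[2]))      # True

# Complex sort: per [R + Rj, R + nanj, nan + Rj, nan + nanj], entries with
# a nan component sort after fully finite ones.
z = np.sort(np.array([complex(np.nan, 0.0), complex(1.0, 1.0)]))
print(np.isnan(z[1].real))   # True
```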
Deprecations
============
The following functions are deprecated:
#. correlate: it takes a new keyword argument old_behavior. When True (the
default), it returns the same result as before. When False, it computes the
conventional correlation and takes the conjugate for complex arrays. The
old behavior will be removed in NumPy 1.5, and raises a
DeprecationWarning in 1.4.
#. unique1d: use unique instead. unique1d raises a deprecation
warning in 1.4, and will be removed in 1.5.
#. intersect1d_nu: use intersect1d instead. intersect1d_nu raises
a deprecation warning in 1.4, and will be removed in 1.5.
#. setmember1d: use in1d instead. setmember1d raises a deprecation
warning in 1.4, and will be removed in 1.5.
The following raise errors:
#. When operating on 0-d arrays, ``numpy.max`` and other functions accept
only ``axis=0``, ``axis=-1`` and ``axis=None``. Using an out-of-bounds
axis is an indication of a bug, so NumPy now raises an error for these
cases.
#. Specifying ``axis > MAX_DIMS`` is no longer allowed; NumPy now raises
an error instead of behaving as it does for ``axis=None``.
Internal changes
================
Use C99 complex functions when available
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The numpy complex types are now guaranteed to be ABI compatible with the C99
complex type, if available on the platform. Moreover, the complex ufuncs now
use the platform C99 functions instead of our own.
split multiarray and umath source code
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The source code of multiarray and umath has been split into separate logical
compilation units. This should make the source code more approachable for
newcomers.
Separate compilation
~~~~~~~~~~~~~~~~~~~~
By default, every file of multiarray (and umath) is merged into one for
compilation as was the case before, but if NPY_SEPARATE_COMPILATION env
variable is set to a non-negative value, experimental individual compilation of
each file is enabled. This makes the compile/debug cycle much faster when
working on core numpy.
Separate core math library
~~~~~~~~~~~~~~~~~~~~~~~~~~
New functions which have been added:
* npy_copysign
* npy_nextafter
* npy_cpack
* npy_creal
* npy_cimag
* npy_cabs
* npy_cexp
* npy_clog
* npy_cpow
* npy_csqrt
* npy_ccos
* npy_csin
=========================
NumPy 1.5.0 Release Notes
=========================
Highlights
==========
Python 3 compatibility
----------------------
This is the first NumPy release which is compatible with Python 3. Support for
Python 3 and Python 2 is done from a single code base. Extensive notes on
changes can be found at
`<http://projects.scipy.org/numpy/browser/trunk/doc/Py3K.txt>`_.
Note that the Numpy testing framework relies on nose, which does not have a
Python 3 compatible release yet. A working Python 3 branch of nose can be found
at `<http://bitbucket.org/jpellerin/nose3/>`_ however.
Porting of SciPy to Python 3 is expected to be completed soon.
:pep:`3118` compatibility
-------------------------
The new buffer protocol described by PEP 3118 is fully supported in this
version of Numpy. On Python versions >= 2.6 Numpy arrays expose the buffer
interface, and array(), asarray() and other functions accept new-style buffers
as input.
New features
============
Warning on casting complex to real
----------------------------------
Numpy now emits a `numpy.ComplexWarning` when a complex number is cast
into a real number. For example:
>>> x = np.array([1,2,3])
>>> x[:2] = np.array([1+2j, 1-2j])
ComplexWarning: Casting complex values to real discards the imaginary part
The cast indeed discards the imaginary part, and this may not be the
intended behavior in all cases, hence the warning. This warning can be
turned off in the standard way:
>>> import warnings
>>> warnings.simplefilter("ignore", np.ComplexWarning)
Dot method for ndarrays
-----------------------
Ndarrays now also have the dot product as a method, which allows writing
chains of matrix products as
>>> a.dot(b).dot(c)
instead of the longer alternative
>>> np.dot(a, np.dot(b, c))
linalg.slogdet function
-----------------------
The slogdet function returns the sign and logarithm of the determinant
of a matrix. Because the determinant may involve the product of many
small/large values, the result is often more accurate than that obtained
by simple multiplication.
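A case where the naive determinant underflows while ``slogdet`` stays accurate (the matrix here is a made-up example):

```python
import numpy as np

# det = (1e-3)**200 = 1e-600, far below the smallest positive float64.
a = np.eye(200) * 1e-3
sign, logdet = np.linalg.slogdet(a)
print(sign)                                      # 1.0
print(np.allclose(logdet, 200 * np.log(1e-3)))   # True

print(np.linalg.det(a))   # 0.0 -- the plain determinant underflows
```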
new header
----------
The new header file ndarraytypes.h contains the symbols from
ndarrayobject.h that do not depend on the PY_ARRAY_UNIQUE_SYMBOL and
NO_IMPORT/_ARRAY macros. Broadly, these symbols are types, typedefs,
and enumerations; the array function calls are left in
ndarrayobject.h. This allows users to include array-related types and
enumerations without needing to concern themselves with the macro
expansions and their side-effects.
Changes
=======
polynomial.polynomial
---------------------
* The polyint and polyder functions now check that the specified number of
integrations or derivations is a non-negative integer. The number 0 is
a valid value for both functions.
* A degree method has been added to the Polynomial class.
* A trimdeg method has been added to the Polynomial class. It operates like
truncate except that the argument is the desired degree of the result,
not the number of coefficients.
* Polynomial.fit now uses None as the default domain for the fit. The default
Polynomial domain can be specified by using [] as the domain value.
* Weights can be used in both polyfit and Polynomial.fit
* A linspace method has been added to the Polynomial class to ease plotting.
* The polymulx function was added.
polynomial.chebyshev
--------------------
* The chebint and chebder functions now check that the specified number of
integrations or derivations is a non-negative integer. The number 0 is
a valid value for both functions.
* A degree method has been added to the Chebyshev class.
* A trimdeg method has been added to the Chebyshev class. It operates like
truncate except that the argument is the desired degree of the result,
not the number of coefficients.
* Chebyshev.fit now uses None as the default domain for the fit. The default
Chebyshev domain can be specified by using [] as the domain value.
* Weights can be used in both chebfit and Chebyshev.fit
* A linspace method has been added to the Chebyshev class to ease plotting.
* The chebmulx function was added.
* Added functions for the Chebyshev points of the first and second kind.
histogram
---------
After a two-year transition period, the old behavior of the histogram function
has been phased out, and the "new" keyword has been removed.
correlate
---------
The old behavior of correlate was deprecated in 1.4.0; the new behavior (the
usual definition for cross-correlation) is now the default.
=========================
NumPy 1.6.0 Release Notes
=========================
This release includes several new features as well as numerous bug fixes and
improved documentation. It is backward compatible with the 1.5.0 release, and
supports Python 2.4 - 2.7 and 3.1 - 3.2.
Highlights
==========
* Re-introduction of datetime dtype support to deal with dates in arrays.
* A new 16-bit floating point type.
* A new iterator, which improves performance of many functions.
New features
============
New 16-bit floating point type
------------------------------
This release adds support for the IEEE 754-2008 binary16 format, available as
the data type ``numpy.half``. Within Python, the type behaves similarly to
`float` or `double`, and C extensions can add support for it with the exposed
half-float API.
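A quick look at the new type from Python (values chosen to show its 2-byte storage, reduced precision, and finite range, whose maximum is 65504):

```python
import numpy as np

h = np.array([1.0, 0.1, 65504.0], dtype=np.half)
print(h.dtype, h.itemsize)     # float16 2

# binary16 carries ~3 decimal digits, so 0.1 is stored approximately.
print(np.float64(h[1]))        # 0.0999755859375

# Values beyond the finite range overflow to inf.
big = np.half(1e5)
print(big)                     # inf
```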
New iterator
------------
A new iterator has been added, replacing the functionality of the
existing iterator and multi-iterator with a single object and API.
This iterator works well with general memory layouts different from
C or Fortran contiguous, and handles both standard NumPy and
customized broadcasting. The buffering, automatic data type
conversion, and optional output parameters offered by
ufuncs but difficult to replicate elsewhere, are now exposed by this
iterator.
Legendre, Laguerre, Hermite, HermiteE polynomials in ``numpy.polynomial``
-------------------------------------------------------------------------
Extend the number of polynomials available in the polynomial package. In
addition, a new ``window`` attribute has been added to the classes in
order to specify the range the ``domain`` maps to. This is mostly useful
for the Laguerre, Hermite, and HermiteE polynomials whose natural domains
are infinite and provides a more intuitive way to get the correct mapping
of values without playing unnatural tricks with the domain.
Fortran assumed shape array and size function support in ``numpy.f2py``
-----------------------------------------------------------------------
F2py now supports wrapping Fortran 90 routines that use assumed shape
arrays. Previously, such routines could be called from Python, but the
corresponding Fortran routines received the assumed shape arrays as zero
length arrays, which caused unpredictable results. Thanks to Lorenz
Hüdepohl for pointing out the correct way to interface routines with
assumed shape arrays.
In addition, f2py now supports automatic wrapping of Fortran routines
that use the two-argument ``size`` function in dimension specifications.
Other new functions
-------------------
``numpy.ravel_multi_index`` : Converts a multi-index tuple into
an array of flat indices, applying boundary modes to the indices.
``numpy.einsum`` : Evaluate the Einstein summation convention. Using the
Einstein summation convention, many common multi-dimensional array operations
can be represented in a simple fashion. This function provides a way to
compute such summations.
``numpy.count_nonzero`` : Counts the number of non-zero elements in an array.
``numpy.result_type`` and ``numpy.min_scalar_type`` : These functions expose
the underlying type promotion used by the ufuncs and other operations to
determine the types of outputs. These improve upon ``numpy.common_type``
and ``numpy.mintypecode``, which provide similar functionality but do
not match the ufunc implementation.
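The new functions side by side (arrays here are arbitrary examples):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# Matrix product expressed in Einstein-summation notation.
c = np.einsum('ij,jk->ik', a, b)
print(np.array_equal(c, a.dot(b)))             # True

# Multi-index (1, 2) in a 2x3 array maps to flat index 1*3 + 2 = 5.
print(np.ravel_multi_index((1, 2), (2, 3)))    # 5

print(np.count_nonzero([0, 1, 2, 0]))          # 2
print(np.result_type(np.int8, np.float32))     # float32
```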
Changes
=======
``default error handling``
--------------------------
The default error handling has been changed from ``print`` to ``warn`` for
all cases except ``underflow``, which remains ``ignore``.
``numpy.distutils``
-------------------
Several new compilers are supported for building Numpy: the Portland Group
Fortran compiler on OS X, the PathScale compiler suite and the 64-bit Intel C
compiler on Linux.
``numpy.testing``
-----------------
The testing framework gained ``numpy.testing.assert_allclose``, which provides
a more convenient way to compare floating point arrays than
`assert_almost_equal`, `assert_approx_equal` and `assert_array_almost_equal`.
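Unlike the decimal-place helpers, ``assert_allclose`` takes explicit relative and absolute tolerances; a minimal demonstration:

```python
import numpy as np
from numpy.testing import assert_allclose

# Within rtol: passes silently.
assert_allclose([1.0, 2.0], [1.0 + 1e-8, 2.0], rtol=1e-7)

# Outside rtol: raises AssertionError with a readable report.
caught = False
try:
    assert_allclose([1.0], [1.1], rtol=1e-7)
except AssertionError:
    caught = True
print(caught)   # True
```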
``C API``
---------
In addition to the APIs for the new iterator and half data type, a number
of other additions have been made to the C API. The type promotion
mechanism used by ufuncs is exposed via ``PyArray_PromoteTypes``,
``PyArray_ResultType``, and ``PyArray_MinScalarType``. A new enumeration
``NPY_CASTING`` has been added which controls what types of casts are
permitted. This is used by the new functions ``PyArray_CanCastArrayTo``
and ``PyArray_CanCastTypeTo``. A more flexible way to handle
conversion of arbitrary python objects into arrays is exposed by
``PyArray_GetArrayParamsFromObject``.
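The same promotion and casting rules are visible from Python, which makes the
C behaviour easy to sketch: ``np.promote_types``, ``np.result_type`` and
``np.can_cast`` mirror ``PyArray_PromoteTypes``, ``PyArray_ResultType`` and
``PyArray_CanCastTypeTo`` respectively, with the ``casting=`` argument
corresponding to the ``NPY_CASTING`` enumeration:

```python
import numpy as np

# Promotion: int16 values cannot all be represented exactly in float16,
# so the promoted type is float32
p = np.promote_types(np.int16, np.float16)

# The casting= argument selects which casts are permitted
safe = np.can_cast(np.float64, np.float32, casting='safe')            # False
same_kind = np.can_cast(np.float64, np.float32, casting='same_kind')  # True
```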
Deprecated features
===================
The "normed" keyword in ``numpy.histogram`` is deprecated. Its functionality
will be replaced by the new "density" keyword.
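Migration is mechanical; for example (sample data invented for illustration):

```python
import numpy as np

data = np.array([0.1, 0.4, 0.4, 0.7, 0.9, 1.3, 1.9])

# density=True replaces normed=True: the histogram integrates to 1
hist, edges = np.histogram(data, bins=4, density=True)
area = np.sum(hist * np.diff(edges))
```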
Removed features
================
``numpy.fft``
-------------
The functions `refft`, `refft2`, `refftn`, `irefft`, `irefft2`, `irefftn`,
which were aliases for the same functions without the 'e' in the name, were
removed.
``numpy.memmap``
----------------
The `sync()` and `close()` methods of memmap were removed. Use `flush()` and
"del memmap" instead.
``numpy.lib``
-------------
The deprecated functions ``numpy.unique1d``, ``numpy.setmember1d``,
``numpy.intersect1d_nu`` and ``numpy.lib.ufunclike.log2`` were removed.
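Each removed function has a maintained replacement; a quick sketch with
invented sample arrays (at the time the replacement for ``setmember1d`` was
``np.in1d``; its modern equivalent ``np.isin`` is used here):

```python
import numpy as np

a = np.array([3, 1, 2, 3, 1])
b = np.array([2, 3, 5])

# np.unique(..., return_index=True) replaces unique1d
vals, idx = np.unique(a, return_index=True)

# elementwise membership covers the old setmember1d use case,
# and np.intersect1d replaces intersect1d_nu
mask = np.isin(a, b)
common = np.intersect1d(a, b)

# np.log2 is the maintained ufunc replacing numpy.lib.ufunclike.log2
lg = np.log2(8.0)
```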
``numpy.ma``
------------
Several deprecated items were removed from the ``numpy.ma`` module:
* ``numpy.ma.MaskedArray`` "raw_data" method
* ``numpy.ma.MaskedArray`` constructor "flag" keyword
* ``numpy.ma.make_mask`` "flag" keyword
* ``numpy.ma.allclose`` "fill_value" keyword
``numpy.distutils``
-------------------
The ``numpy.get_numpy_include`` function was removed, use ``numpy.get_include``
instead.
=========================
NumPy 1.6.1 Release Notes
=========================
This is a bugfix only release in the 1.6.x series.
Issues fixed
------------
#1834 einsum fails for specific shapes
#1837 einsum throws nan or freezes python for specific array shapes
#1838 object <-> structured type arrays regression
#1851 regression for SWIG based code in 1.6.0
#1863 Buggy results when operating on array copied with astype()
#1870 Fix corner case of object array assignment
#1843 Py3k: fix error with recarray
#1885 nditer: Error in detecting double reduction loop
#1874 f2py: fix --include_paths bug
#1749 Fix ctypes.load_library()
#1895/1896 iter: writeonly operands weren't always being buffered correctly
=========================
NumPy 1.6.2 Release Notes
=========================
This is a bugfix release in the 1.6.x series. Due to the delay of the NumPy
1.7.0 release, this release contains far more fixes than a regular NumPy bugfix
release. It also includes a number of documentation and build improvements.
``numpy.core`` issues fixed
---------------------------
#2063 make unique() return consistent index
#1138 allow creating arrays from empty buffers or empty slices
#1446 correct note about correspondence vstack and concatenate
#1149 make argmin() work for datetime
#1672 fix allclose() to work for scalar inf
#1747 make np.median() work for 0-D arrays
#1776 make complex division by zero yield inf properly
#1675 add scalar support for the format() function
#1905 explicitly check for NaNs in allclose()
#1952 allow floating ddof in std() and var()
#1948 fix regression for indexing chararrays with empty list
#2017 fix type hashing
#2046 deleting array attributes causes segfault
#2033 a**2.0 has incorrect type
#2045 make attribute/iterator_element deletions not segfault
#2021 fix segfault in searchsorted()
#2073 fix float16 __array_interface__ bug
``numpy.lib`` issues fixed
--------------------------
#2048 break reference cycle in NpzFile
#1573 savetxt() now handles complex arrays
#1387 allow bincount() to accept empty arrays
#1899 fixed histogramdd() bug with empty inputs
#1793 fix failing npyio test under py3k
#1936 fix extra nesting for subarray dtypes
#1848 make tril/triu return the same dtype as the original array
#1918 use Py_TYPE to access ob_type, so it works also on Py3
``numpy.f2py`` changes
----------------------
ENH: Introduce new options extra_f77_compiler_args and extra_f90_compiler_args
BLD: Improve reporting of fcompiler value
BUG: Fix f2py test_kind.py test
``numpy.poly`` changes
----------------------
ENH: Add some tests for polynomial printing
ENH: Add companion matrix functions
DOC: Rearrange the polynomial documents
BUG: Fix up links to classes
DOC: Add version added to some of the polynomial package modules
DOC: Document xxxfit functions in the polynomial package modules
BUG: The polynomial convenience classes let different types interact
DOC: Document the use of the polynomial convenience classes
DOC: Improve numpy reference documentation of polynomial classes
ENH: Improve the computation of polynomials from roots
STY: Code cleanup in polynomial [*]fromroots functions
DOC: Remove references to cast and NA, which were added in 1.7
``numpy.distutils`` issues fixed
--------------------------------
#1261 change compile flag on AIX from -O5 to -O3
#1377 update HP compiler flags
#1383 provide better support for C++ code on HPUX
#1857 fix build for py3k + pip
BLD: raise a clearer warning in case of building without cleaning up first
BLD: follow build_ext coding convention in build_clib
BLD: fix up detection of Intel CPU on OS X in system_info.py
BLD: add support for the new X11 directory structure on Ubuntu & co.
BLD: add ufsparse to the libraries search path.
BLD: add 'pgfortran' as a valid compiler in the Portland Group
BLD: update version match regexp for IBM AIX Fortran compilers.
``numpy.random`` issues fixed
-----------------------------
BUG: Use npy_intp instead of long in mtrand
=========================
NumPy 2.0.0 Release Notes
=========================
Highlights
==========
New features
============
Changes
=======
.. vim:syntax=rst
Introduction
============
This document proposes some enhancements for numpy and scipy releases.
Successive numpy and scipy releases are too far apart from a time point of
view - some people who are in the numpy release team feel that it cannot
improve without a bit more formal release process. The main proposal is to
follow a time-based release, with expected dates for code freeze, beta and rc.
The goal is twofold: make releases more predictable, and move the code forward.
Rationale
=========
Right now, the release process of numpy is relatively organic. When some
features are there, we may decide to make a new release. Because there is no
fixed schedule, people don't really know when new features and bug fixes will
go into a release. More significantly, having an expected release schedule
helps to *coordinate* efforts: at the beginning of a cycle, everybody can jump
in and put in new code, even break things if needed. But after some point, only
bug fixes are accepted: this makes beta and RC releases much easier; calming
things down toward the release date helps focus on bugs and regressions.
Proposal
========
Time schedule
-------------
The proposed schedule is to release numpy every 9 weeks - the exact period can
be tweaked if it ends up not working as expected. There will be several stages
for the cycle:
* Development: anything can happen (by anything, we mean as currently
done). The focus is on new features, refactoring, etc...
* Beta: no new features, and no bug fixes that require heavy changes;
only regression fixes which appear on supported platforms and were not
caught earlier.
* Polish/RC: only docstring changes and blocker regressions are allowed.
The schedule would be as follows:
+------+-----------------+-----------------+------------------+
| Week | 1.3.0 | 1.4.0 | Release time |
+======+=================+=================+==================+
| 1 | Development | | |
+------+-----------------+-----------------+------------------+
| 2 | Development | | |
+------+-----------------+-----------------+------------------+
| 3 | Development | | |
+------+-----------------+-----------------+------------------+
| 4 | Development | | |
+------+-----------------+-----------------+------------------+
| 5 | Development | | |
+------+-----------------+-----------------+------------------+
| 6 | Development | | |
+------+-----------------+-----------------+------------------+
| 7 | Beta | | |
+------+-----------------+-----------------+------------------+
| 8 | Beta | | |
+------+-----------------+-----------------+------------------+
| 9 | Beta | | 1.3.0 released |
+------+-----------------+-----------------+------------------+
| 10 | Polish | Development | |
+------+-----------------+-----------------+------------------+
| 11 | Polish | Development | |
+------+-----------------+-----------------+------------------+
| 12 | Polish | Development | |
+------+-----------------+-----------------+------------------+
| 13 | Polish | Development | |
+------+-----------------+-----------------+------------------+
| 14 | | Development | |
+------+-----------------+-----------------+------------------+
| 15 | | Development | |
+------+-----------------+-----------------+------------------+
| 16 | | Beta | |
+------+-----------------+-----------------+------------------+
| 17 | | Beta | |
+------+-----------------+-----------------+------------------+
| 18 | | Beta | 1.4.0 released |
+------+-----------------+-----------------+------------------+
Each stage can be defined as follows:
+------------------+-------------+----------------+----------------+
| | Development | Beta | Polish |
+==================+=============+================+================+
| Python Frozen | | slushy | Y |
+------------------+-------------+----------------+----------------+
| Docstring Frozen | | slushy | thicker slush |
+------------------+-------------+----------------+----------------+
| C code Frozen | | thicker slush | thicker slush |
+------------------+-------------+----------------+----------------+
Terminology:
* slushy: you can change it if you beg the release team and it's really
important and you coordinate with docs/translations; no "big"
changes.
* thicker slush: you can change it if it's an open bug marked
showstopper for the Polish release, you beg the release team, the
change is very very small yet very very important, and you feel
extremely guilty about your transgressions.
The different frozen states are intended to be gradients. The exact meaning is
decided by the release manager, who has the last word on what goes in and what
doesn't. The proposed schedule means that there would be at most 12 weeks
between putting code into the source code repository and it being released.
Release team
------------
For every release, there would be at least one release manager. We propose to
rotate the release manager: rotation means it is not always the same person
doing the dirty job, and it should also keep the release manager honest.
References
==========
* Proposed schedule for Gnome from Havoc Pennington (one of the core
GTK and Gnome managers):
http://mail.gnome.org/archives/gnome-hackers/2002-June/msg00041.html
The proposed schedule is heavily based on this email
* http://live.gnome.org/ReleasePlanning/Freezes
@import "default.css";
/**
* Spacing fixes
*/
div.body p, div.body dd, div.body li {
line-height: 125%;
}
ul.simple {
margin-top: 0;
margin-bottom: 0;
padding-top: 0;
padding-bottom: 0;
}
/* spacing around blockquoted fields in parameters/attributes/returns */
td.field-body > blockquote {
margin-top: 0.1em;
margin-bottom: 0.5em;
}
/* spacing around example code */
div.highlight > pre {
padding: 2px 5px 2px 5px;
}
/* spacing in see also definition lists */
dl.last > dd {
margin-top: 1px;
margin-bottom: 5px;
margin-left: 30px;
}
/**
* Hide dummy toctrees
*/
ul {
padding-top: 0;
padding-bottom: 0;
margin-top: 0;
margin-bottom: 0;
}
ul li {
padding-top: 0;
padding-bottom: 0;
margin-top: 0;
margin-bottom: 0;
}
ul li a.reference {
padding-top: 0;
padding-bottom: 0;
margin-top: 0;
margin-bottom: 0;
}
/**
* Make high-level subsections easier to distinguish from top-level ones
*/
div.body h3 {
background-color: transparent;
}
div.body h4 {
border: none;
background-color: transparent;
}
/**
* Scipy colors
*/
body {
background-color: rgb(100,135,220);
}
div.document {
background-color: rgb(230,230,230);
}
div.sphinxsidebar {
background-color: rgb(230,230,230);
}
div.related {
background-color: rgb(100,135,220);
}
div.sphinxsidebar h3 {
color: rgb(0,102,204);
}
div.sphinxsidebar h3 a {
color: rgb(0,102,204);
}
div.sphinxsidebar h4 {
color: rgb(0,82,194);
}
div.sphinxsidebar p {
color: black;
}
div.sphinxsidebar a {
color: #355f7c;
}
div.sphinxsidebar ul.want-points {
list-style: disc;
}
.field-list th {
color: rgb(0,102,204);
}
/**
* Extra admonitions
*/
div.tip {
background-color: #ffffe4;
border: 1px solid #ee6;
}
div.plot-output {
clear-after: both;
}
div.plot-output .figure {
float: left;
text-align: center;
margin-bottom: 0;
padding-bottom: 0;
}
div.plot-output .caption {
margin-top: 2px;
padding-top: 0;
}
div.plot-output p.admonition-title {
display: none;
}
div.plot-output:after {
content: "";
display: block;
height: 0;
clear: both;
}
/*
div.admonition-example {
background-color: #e4ffe4;
border: 1px solid #ccc;
}*/
/**
* Styling for field lists
*/
table.field-list th {
border-left: 1px solid #aaa !important;
padding-left: 5px;
}
table.field-list {
border-collapse: separate;
border-spacing: 10px;
}
/**
* Styling for footnotes
*/
table.footnote td, table.footnote th {
border: none;
}
{% extends "!autosummary/class.rst" %}
{% block methods %}
{% if methods %}
.. HACK
.. autosummary::
:toctree:
{% for item in methods %}
{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block attributes %}
{% if attributes %}
.. HACK
.. autosummary::
:toctree:
{% for item in attributes %}
{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% extends "defindex.html" %}
{% block tables %}
<p><strong>Parts of the documentation:</strong></p>
<table class="contentstable" align="center"><tr>
<td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("user/index") }}">Numpy User Guide</a><br/>
<span class="linkdescr">start here</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("reference/index") }}">Numpy Reference</a><br/>
<span class="linkdescr">reference documentation</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("dev/index") }}">Numpy Developer Guide</a><br/>
<span class="linkdescr">contributing to NumPy</span></p>
</td></tr>
</table>
<p><strong>Indices and tables:</strong></p>
<table class="contentstable" align="center"><tr>
<td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("modindex") }}">Module Index</a><br/>
<span class="linkdescr">quick access to all modules</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("genindex") }}">General Index</a><br/>
<span class="linkdescr">all functions, classes, terms</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("glossary") }}">Glossary</a><br/>
<span class="linkdescr">the most important terms explained</span></p>
</td><td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("search") }}">Search page</a><br/>
<span class="linkdescr">search this documentation</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("contents") }}">Complete Table of Contents</a><br/>
<span class="linkdescr">lists all sections and subsections</span></p>
</td></tr>
</table>
<p><strong>Meta information:</strong></p>
<table class="contentstable" align="center"><tr>
<td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("bugs") }}">Reporting bugs</a></p>
<p class="biglink"><a class="biglink" href="{{ pathto("about") }}">About NumPy</a></p>
</td><td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("release") }}">Release Notes</a></p>
<p class="biglink"><a class="biglink" href="{{ pathto("license") }}">License of Numpy</a></p>
</td></tr>
</table>
<h2>Acknowledgements</h2>
<p>
Large parts of this manual originate from Travis E. Oliphant's book
<a href="http://www.tramy.us/">"Guide to Numpy"</a> (which generously entered
Public Domain in August 2008). The reference documentation for many of
the functions are written by numerous contributors and developers of
Numpy, both prior to and during the
<a href="http://scipy.org/Developer_Zone/DocMarathon2008">Numpy Documentation Marathon</a>.
</p>
<p>
The Documentation Marathon is still ongoing. Please help us write
better documentation for Numpy by joining it! Instructions on how to
join and what to do can be found
<a href="http://scipy.org/Developer_Zone/DocMarathon2008">on the scipy.org website</a>.
</p>
{% endblock %}
<h3>Resources</h3>
<ul>
<li><a href="http://scipy.org/">Scipy.org website</a></li>
<li>&nbsp;</li>
</ul>
{% extends "!layout.html" %}
{% block rootrellink %}
<li><a href="{{ pathto('index') }}">{{ shorttitle }}</a>{{ reldelim1 }}</li>
{% endblock %}
{% block sidebarsearch %}
{%- if sourcename %}
<ul class="this-page-menu">
{%- if 'reference/generated' in sourcename %}
<li><a href="/numpy/docs/{{ sourcename.replace('reference/generated/', '').replace('.txt', '') |e }}">{{_('Edit page')}}</a></li>
{%- else %}
<li><a href="/numpy/docs/numpy-docs/{{ sourcename.replace('.txt', '.rst') |e }}">{{_('Edit page')}}</a></li>
{%- endif %}
</ul>
{%- endif %}
{{ super() }}
{% endblock %}
About NumPy
===========
`NumPy <http://www.scipy.org/NumPy/>`__ is the fundamental package
needed for scientific computing with Python. This package contains:
- a powerful N-dimensional :ref:`array object <arrays>`
- sophisticated :ref:`(broadcasting) functions <ufuncs>`
- basic :ref:`linear algebra functions <routines.linalg>`
- basic :ref:`Fourier transforms <routines.fft>`
- sophisticated :ref:`random number capabilities <routines.random>`
- tools for integrating Fortran code
- tools for integrating C/C++ code
Besides its obvious scientific uses, *NumPy* can also be used as an
efficient multi-dimensional container of generic data. Arbitrary
data types can be defined. This allows *NumPy* to seamlessly and
speedily integrate with a wide variety of databases.
NumPy is the successor to two earlier scientific Python libraries:
it derives from the old *Numeric* code base and can be used
as a replacement for *Numeric*. It also adds the features introduced
by *Numarray* and can be used to replace *Numarray* as well.
NumPy community
---------------
Numpy is a distributed, volunteer, open-source project. *You* can help
us make it better; if you believe something should be improved either
in functionality or in documentation, don't hesitate to contact us --- or
even better, contact us and participate in fixing the problem.
Our main means of communication are:
- `scipy.org website <http://scipy.org/>`__
- `Mailing lists <http://scipy.org/Mailing_Lists>`__
- `Numpy Trac <http://projects.scipy.org/numpy>`__ (bug "tickets" go here)
More information about the development of Numpy can be found at
http://scipy.org/Developer_Zone
If you want to fix issues in this documentation, the easiest way
is to participate in `our ongoing documentation marathon
<http://scipy.org/Developer_Zone/DocMarathon2008>`__.
About this documentation
========================
Conventions
-----------
Names of classes, objects, constants, etc. are given in **boldface** font.
Often they are also links to a more detailed documentation of the
referred object.
This manual contains many examples of use, usually prefixed with the
Python prompt ``>>>`` (which is not a part of the example code). The
examples assume that you have first entered::
>>> import numpy as np
before running the examples.
**************
Reporting bugs
**************
File bug reports or feature requests, and make contributions
(e.g. code patches), by submitting a "ticket" on the Trac pages:
- Numpy Trac: http://scipy.org/scipy/numpy
Because of spam abuse, you must create an account on our Trac in order
to submit a ticket, then click on the "New Ticket" tab that only
appears when you have logged in. Please give as much information as
you can in the ticket. It is extremely useful if you can supply a
small self-contained code snippet that reproduces the problem. Also
specify the component, the version you are referring to and the
milestone.
Report bugs to the appropriate Trac instance (there is one for NumPy
and a different one for SciPy). There are also read-only mailing lists
for tracking the status of your bug ticket.
More information can be found on the http://scipy.org/Developer_Zone
website.
# -*- coding: utf-8 -*-
import sys, os, re
# Check Sphinx version
import sphinx
if sphinx.__version__ < "1.0.1":
raise RuntimeError("Sphinx 1.0.1 or newer required")
needs_sphinx = '1.0'
# -----------------------------------------------------------------------------
# General configuration
# -----------------------------------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
sys.path.insert(0, os.path.abspath('../sphinxext'))
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.pngmath', 'numpydoc',
'sphinx.ext.intersphinx', 'sphinx.ext.coverage',
'sphinx.ext.doctest', 'sphinx.ext.autosummary',
'plot_directive']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
#master_doc = 'index'
# General substitutions.
project = 'NumPy'
copyright = '2008-2009, The Scipy community'
# The default replacements for |version| and |release|, also used in various
# other places throughout the built documents.
#
import numpy
# The short X.Y version (including .devXXXX, rcX, b1 suffixes if present)
version = re.sub(r'(\d+\.\d+)\.\d+(.*)', r'\1\2', numpy.__version__)
version = re.sub(r'(\.dev\d+).*?$', r'\1', version)
# The full version, including alpha/beta/rc tags.
release = numpy.__version__
print version, release
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# The reST default role (used for this markup: `text`) to use for all documents.
default_role = "autolink"
# List of directories, relative to source directories, that shouldn't be searched
# for source files.
exclude_dirs = []
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = False
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -----------------------------------------------------------------------------
# HTML output
# -----------------------------------------------------------------------------
# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
html_style = 'scipy.css'
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = "%s v%s Manual (DRAFT)" % (project, version)
# The name of an image file (within the static path) to place at the top of
# the sidebar.
html_logo = 'scipyshiny_small.png'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
'index': 'indexsidebar.html'
}
# Additional templates that should be rendered to pages, maps page names to
# template names.
html_additional_pages = {
'index': 'indexcontent.html',
}
# If false, no module index is generated.
html_use_modindex = True
# If true, the reST sources are included in the HTML build as _sources/<name>.
#html_copy_source = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".html").
#html_file_suffix = '.html'
# Output file base name for HTML help builder.
htmlhelp_basename = 'numpy'
# Pngmath should try to align formulas properly
pngmath_use_preview = True
# -----------------------------------------------------------------------------
# LaTeX output
# -----------------------------------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class [howto/manual]).
_stdauthor = 'Written by the NumPy community'
latex_documents = [
('reference/index', 'numpy-ref.tex', 'NumPy Reference',
_stdauthor, 'manual'),
('user/index', 'numpy-user.tex', 'NumPy User Guide',
_stdauthor, 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# Additional stuff for the LaTeX preamble.
latex_preamble = r'''
\usepackage{amsmath}
\DeclareUnicodeCharacter{00A0}{\nobreakspace}
% In the parameters section, place a newline after the Parameters
% header
\usepackage{expdlist}
\let\latexdescription=\description
\def\description{\latexdescription{}{} \breaklabel}
% Make Examples/etc section headers smaller and more compact
\makeatletter
\titleformat{\paragraph}{\normalsize\py@HeaderFamily}%
{\py@TitleColor}{0em}{\py@TitleColor}{\py@NormalColor}
\titlespacing*{\paragraph}{0pt}{1ex}{0pt}
\makeatother
% Fix footer/header
\renewcommand{\chaptermark}[1]{\markboth{\MakeUppercase{\thechapter.\ #1}}{}}
\renewcommand{\sectionmark}[1]{\markright{\MakeUppercase{\thesection.\ #1}}}
'''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
latex_use_modindex = False
# -----------------------------------------------------------------------------
# Intersphinx configuration
# -----------------------------------------------------------------------------
intersphinx_mapping = {'http://docs.python.org/dev': None}
# -----------------------------------------------------------------------------
# Numpy extensions
# -----------------------------------------------------------------------------
# If we want to do a phantom import from an XML file for all autodocs
phantom_import_file = 'dump.xml'
# Make numpydoc to generate plots for example sections
numpydoc_use_plots = True
# -----------------------------------------------------------------------------
# Autosummary
# -----------------------------------------------------------------------------
import glob
autosummary_generate = glob.glob("reference/*.rst")
# -----------------------------------------------------------------------------
# Coverage checker
# -----------------------------------------------------------------------------
coverage_ignore_modules = r"""
""".split()
coverage_ignore_functions = r"""
test($|_) (some|all)true bitwise_not cumproduct pkgload
generic\.
""".split()
coverage_ignore_classes = r"""
""".split()
coverage_c_path = []
coverage_c_regexes = {}
coverage_ignore_c_items = {}
# -----------------------------------------------------------------------------
# Plots
# -----------------------------------------------------------------------------
plot_pre_code = """
import numpy as np
np.random.seed(0)
"""
plot_include_source = True
plot_formats = [('png', 100), 'pdf']
import math
phi = (math.sqrt(5) + 1)/2
import matplotlib
matplotlib.rcParams.update({
'font.size': 8,
'axes.titlesize': 8,
'axes.labelsize': 8,
'xtick.labelsize': 8,
'ytick.labelsize': 8,
'legend.fontsize': 8,
'figure.figsize': (3*phi, 3),
'figure.subplot.bottom': 0.2,
'figure.subplot.left': 0.2,
'figure.subplot.right': 0.9,
'figure.subplot.top': 0.85,
'figure.subplot.wspace': 0.4,
'text.usetex': False,
})
#####################
Numpy manual contents
#####################
.. toctree::
user/index
reference/index
dev/index
release
about
bugs
license
glossary
.. _configure-git:
=================
Git configuration
=================
.. _git-config-basic:
Overview
========
Your personal git_ configurations are saved in the ``.gitconfig`` file in
your home directory.
Here is an example ``.gitconfig`` file::
[user]
name = Your Name
email = you@yourdomain.example.com
[alias]
ci = commit -a
co = checkout
st = status -a
stat = status -a
br = branch
wdiff = diff --color-words
[core]
editor = vim
[merge]
summary = true
You can edit this file directly or you can use the ``git config --global``
command::
git config --global user.name "Your Name"
git config --global user.email you@yourdomain.example.com
git config --global alias.ci "commit -a"
git config --global alias.co checkout
git config --global alias.st "status -a"
git config --global alias.stat "status -a"
git config --global alias.br branch
git config --global alias.wdiff "diff --color-words"
git config --global core.editor vim
git config --global merge.summary true
To set up on another computer, you can copy your ``~/.gitconfig`` file,
or run the commands above.
In detail
=========
user.name and user.email
------------------------
It is good practice to tell git_ who you are, for labeling any changes
you make to the code. The simplest way to do this is from the command
line::
git config --global user.name "Your Name"
git config --global user.email you@yourdomain.example.com
This will write the settings into your git configuration file, which
should now contain a user section with your name and email::
[user]
name = Your Name
email = you@yourdomain.example.com
Of course you'll need to replace ``Your Name`` and ``you@yourdomain.example.com``
with your actual name and email address.
Aliases
-------
You might well benefit from some aliases for common commands.
For example, you might want to be able to shorten ``git checkout``
to ``git co``. Or you may want to alias ``git diff --color-words``
(which gives a nicely formatted output of the diff) to ``git wdiff``.
The following ``git config --global`` commands::
git config --global alias.ci "commit -a"
git config --global alias.co checkout
git config --global alias.st "status -a"
git config --global alias.stat "status -a"
git config --global alias.br branch
git config --global alias.wdiff "diff --color-words"
will create an ``alias`` section in your ``.gitconfig`` file with contents
like this::
[alias]
ci = commit -a
co = checkout
st = status -a
stat = status -a
br = branch
wdiff = diff --color-words
Editor
------
You may also want to make sure that your editor of choice is used::
git config --global core.editor vim
Merging
-------
To enforce summaries when doing merges (``~/.gitconfig`` file again)::
[merge]
log = true
Or from the command line::
git config --global merge.log true
.. include:: git_links.inc
====================================
Getting started with Git development
====================================
Basic Git setup
###############
* :ref:`install-git`.
* Introduce yourself to Git::
git config --global user.email you@yourdomain.example.com
git config --global user.name "Your Name Comes Here"
.. _forking:
Making your own copy (fork) of NumPy
####################################
You need to do this only once. The instructions here are very similar
to the instructions at http://help.github.com/forking/ - please see that
page for more detail. We're repeating some of it here just to give the
specifics for the NumPy_ project, and to suggest some default names.
Set up and configure a github_ account
======================================
If you don't have a github_ account, go to the github_ page, and make one.
You then need to configure your account to allow write access - see the
``Generating SSH keys`` help on `github help`_.
Create your own forked copy of NumPy_
=========================================
#. Log into your github_ account.
#. Go to the NumPy_ github home at `NumPy github`_.
#. Click on the *fork* button:
.. image:: forking_button.png
Now, after a short pause and some 'Hardcore forking action', you
should find yourself at the home page for your own forked copy of NumPy_.
.. include:: git_links.inc
.. _set-up-fork:
Set up your fork
################
First you follow the instructions for :ref:`forking`.
Overview
========
::
git clone git@github.com:your-user-name/numpy.git
cd numpy
git remote add upstream git://github.com/numpy/numpy.git
In detail
=========
Clone your fork
---------------
#. Clone your fork to the local computer with ``git clone
git@github.com:your-user-name/numpy.git``
#. Investigate. Change directory to your new repo: ``cd numpy``. Then
``git branch -a`` to show you all branches. You'll get something
like::
* master
remotes/origin/master
This tells you that you are currently on the ``master`` branch, and
that you also have a ``remote`` connection to ``origin/master``.
What remote repository is ``remotes/origin``? Try ``git remote -v`` to
see the URLs for the remote. They will point to your github_ fork.
Now you want to connect to the upstream `NumPy github`_ repository, so
you can merge in changes from trunk.
.. _linking-to-upstream:
Linking your repository to the upstream repo
--------------------------------------------
::
cd numpy
git remote add upstream git://github.com/numpy/numpy.git
``upstream`` here is just the arbitrary name we're using to refer to the
main NumPy_ repository at `NumPy github`_.
Note that we've used ``git://`` for the URL rather than ``git@``. The
``git://`` URL is read only.  This means that we can't accidentally
(or deliberately) write to the upstream repo, and we are only going to
use it to merge into our own code.
Just for your own satisfaction, show yourself that you now have a new
'remote', with ``git remote -v show``, giving you something like::
upstream git://github.com/numpy/numpy.git (fetch)
upstream git://github.com/numpy/numpy.git (push)
origin git@github.com:your-user-name/numpy.git (fetch)
origin git@github.com:your-user-name/numpy.git (push)
.. include:: git_links.inc
.. _dot2-dot3:
========================================
Two and three dots in difference specs
========================================
Thanks to Yarik Halchenko for this explanation.
Imagine a series of commits A, B, C, D... Imagine that there are two
branches, *topic* and *master*. You branched *topic* off *master* when
*master* was at commit 'E'. The graph of the commits looks like this::
A---B---C topic
/
D---E---F---G master
Then::
git diff master..topic
will output the difference from G to C (i.e. with effects of F and G),
while::
git diff master...topic
would output just differences in the topic branch (i.e. only A, B, and
C).
.. _following-latest:
=============================
Following the latest source
=============================
These are the instructions if you just want to follow the latest
*NumPy* source, but you don't need to do any development for now.
The steps are:
* :ref:`install-git`
* get local copy of the git repository from github_
* update local copy from time to time
Get the local copy of the code
==============================
From the command line::
git clone git://github.com/numpy/numpy.git
You now have a copy of the code tree in the new ``numpy`` directory.
Updating the code
=================
From time to time you may want to pull down the latest code. Do this with::
cd numpy
git fetch
git merge --ff-only origin/master
The tree in ``numpy`` will now have the latest changes from the initial
repository.
.. include:: git_links.inc
.. _git-development:
=====================
Git for development
=====================
Contents:
.. toctree::
:maxdepth: 2
development_setup
configure_git
development_workflow
============
Introduction
============
These pages describe a git_ and github_ workflow for the NumPy_
project.
There are several different workflows here, for different ways of
working with *NumPy*.
This is not a comprehensive git_ reference; it's just a workflow for our
own project. It's tailored to the github_ hosting service. You may well
find better or quicker ways of getting stuff done with git_, but these
should get you started.
For general resources for learning git_ see :ref:`git-resources`.
.. _install-git:
Install git
===========
Overview
--------
================ =================================
Debian / Ubuntu  ``sudo apt-get install git-core``
Fedora           ``sudo yum install git-core``
Windows          Download and install msysGit_
OS X             Use the git-osx-installer_
================ =================================
In detail
---------
See the git_ page for the most recent information.
Have a look at the github_ install help pages available from `github help`_.
There are good instructions here: http://book.git-scm.com/2_installing_git.html
.. include:: git_links.inc
.. This (-*- rst -*-) format file contains commonly used link targets
and name substitutions. It may be included in many files,
therefore it should only contain link targets and name
substitutions. Try grepping for "^\.\. _" to find plausible
candidates for this list.
.. NOTE: reST targets are
__not_case_sensitive__, so only one target definition is needed for
nipy, NIPY, Nipy, etc...
.. PROJECTNAME placeholders
.. _PROJECTNAME: http://neuroimaging.scipy.org
.. _`PROJECTNAME github`: http://github.com/nipy
.. _`PROJECTNAME mailing list`: http://projects.scipy.org/mailman/listinfo/nipy-devel
.. nipy
.. _nipy: http://nipy.org/nipy
.. _`nipy github`: http://github.com/nipy/nipy
.. _`nipy mailing list`: http://mail.scipy.org/mailman/listinfo/nipy-devel
.. ipython
.. _ipython: http://ipython.scipy.org
.. _`ipython github`: http://github.com/ipython/ipython
.. _`ipython mailing list`: http://mail.scipy.org/mailman/listinfo/IPython-dev
.. dipy
.. _dipy: http://nipy.org/dipy
.. _`dipy github`: http://github.com/Garyfallidis/dipy
.. _`dipy mailing list`: http://mail.scipy.org/mailman/listinfo/nipy-devel
.. nibabel
.. _nibabel: http://nipy.org/nibabel
.. _`nibabel github`: http://github.com/nipy/nibabel
.. _`nibabel mailing list`: http://mail.scipy.org/mailman/listinfo/nipy-devel
.. marsbar
.. _marsbar: http://marsbar.sourceforge.net
.. _`marsbar github`: http://github.com/matthew-brett/marsbar
.. _`MarsBaR mailing list`: https://lists.sourceforge.net/lists/listinfo/marsbar-users
.. git stuff
.. _git: http://git-scm.com/
.. _github: http://github.com
.. _github help: http://help.github.com
.. _msysgit: http://code.google.com/p/msysgit/downloads/list
.. _git-osx-installer: http://code.google.com/p/git-osx-installer/downloads/list
.. _subversion: http://subversion.tigris.org/
.. _git cheat sheet: http://github.com/guides/git-cheat-sheet
.. _pro git book: http://progit.org/
.. _git svn crash course: http://git-scm.com/course/svn.html
.. _learn.github: http://learn.github.com/
.. _network graph visualizer: http://github.com/blog/39-say-hello-to-the-network-graph-visualizer
.. _git user manual: http://www.kernel.org/pub/software/scm/git/docs/user-manual.html
.. _git tutorial: http://www.kernel.org/pub/software/scm/git/docs/gittutorial.html
.. _git community book: http://book.git-scm.com/
.. _git ready: http://www.gitready.com/
.. _git casts: http://www.gitcasts.com/
.. _Fernando's git page: http://www.fperez.org/py4science/git.html
.. _git magic: http://www-cs-students.stanford.edu/~blynn/gitmagic/index.html
.. _git concepts: http://www.eecs.harvard.edu/~cduan/technical/git/
.. _git clone: http://www.kernel.org/pub/software/scm/git/docs/git-clone.html
.. _git checkout: http://www.kernel.org/pub/software/scm/git/docs/git-checkout.html
.. _git commit: http://www.kernel.org/pub/software/scm/git/docs/git-commit.html
.. _git push: http://www.kernel.org/pub/software/scm/git/docs/git-push.html
.. _git pull: http://www.kernel.org/pub/software/scm/git/docs/git-pull.html
.. _git add: http://www.kernel.org/pub/software/scm/git/docs/git-add.html
.. _git status: http://www.kernel.org/pub/software/scm/git/docs/git-status.html
.. _git diff: http://www.kernel.org/pub/software/scm/git/docs/git-diff.html
.. _git log: http://www.kernel.org/pub/software/scm/git/docs/git-log.html
.. _git branch: http://www.kernel.org/pub/software/scm/git/docs/git-branch.html
.. _git remote: http://www.kernel.org/pub/software/scm/git/docs/git-remote.html
.. _git config: http://www.kernel.org/pub/software/scm/git/docs/git-config.html
.. _why the -a flag?: http://www.gitready.com/beginner/2009/01/18/the-staging-area.html
.. _git staging area: http://www.gitready.com/beginner/2009/01/18/the-staging-area.html
.. _tangled working copy problem: http://tomayko.com/writings/the-thing-about-git
.. _git management: http://kerneltrap.org/Linux/Git_Management
.. _linux git workflow: http://www.mail-archive.com/dri-devel@lists.sourceforge.net/msg39091.html
.. _git parable: http://tom.preston-werner.com/2009/05/19/the-git-parable.html
.. _git foundation: http://matthew-brett.github.com/pydagogue/foundation.html
.. other stuff
.. _python: http://www.python.org
.. _NumPy: http://numpy.scipy.org
.. _`NumPy github`: http://github.com/numpy/numpy
.. _`NumPy mailing list`: http://scipy.org/Mailing_Lists
.. _git-resources:
================
git_ resources
================
Tutorials and summaries
=======================
* `github help`_ has an excellent series of how-to guides.
* `learn.github`_ has an excellent series of tutorials.
* The `pro git book`_ is a good in-depth book on git.
* A `git cheat sheet`_ is a page giving summaries of common commands.
* The `git user manual`_
* The `git tutorial`_
* The `git community book`_
* `git ready`_ - a nice series of tutorials
* `git casts`_ - video snippets giving git how-tos.
* `git magic`_ - extended introduction with intermediate detail
* The `git parable`_ is an easy read explaining the concepts behind git.
* Our own `git foundation`_ expands on the `git parable`_.
* Fernando Perez' git page - `Fernando's git page`_ - many links and tips
* A good but technical page on `git concepts`_
* `git svn crash course`_: git_ for those of us used to subversion_
Advanced git workflow
=====================
There are many ways of working with git_; here are some posts on the
rules of thumb that other projects have come up with:
* Linus Torvalds on `git management`_
* Linus Torvalds on `linux git workflow`_.  Summary: use the git tools
to make the history of your edits as clean as possible; merge from
upstream edits as little as possible in branches where you are doing
active development.
Manual pages online
===================
You can get these on your own machine with (e.g.) ``git help push`` or
(same thing) ``git push --help``, but, for convenience, here are the
online manual pages for some common commands:
* `git add`_
* `git branch`_
* `git checkout`_
* `git clone`_
* `git commit`_
* `git config`_
* `git diff`_
* `git log`_
* `git pull`_
* `git push`_
* `git remote`_
* `git status`_
.. include:: git_links.inc
.. _using-git:
Working with *NumPy* source code
======================================
Contents:
.. toctree::
:maxdepth: 2
git_intro
following_latest
patching
git_development
git_resources
================
Making a patch
================
You've discovered a bug or something else you want to change in
NumPy_ - excellent!
You've worked out a way to fix it - even better!
You want to tell us about it - best of all!
The easiest way is to make a *patch* or set of patches.  Here we explain
how.  Making a patch is the simplest and quickest way to contribute, but
if you're going to be doing anything more than simple quick things,
please consider following the :ref:`git-development` model instead.
.. _making-patches:
Making patches
==============
Overview
--------
::
# tell git who you are
git config --global user.email you@yourdomain.example.com
git config --global user.name "Your Name Comes Here"
# get the repository if you don't have it
git clone git://github.com/numpy/numpy.git
# make a branch for your patching
cd numpy
git branch the-fix-im-thinking-of
git checkout the-fix-im-thinking-of
# hack, hack, hack
# Tell git about any new files you've made
git add somewhere/tests/test_my_bug.py
# commit work in progress as you go
git commit -am 'BF - added tests for Funny bug'
# hack hack, hack
git commit -am 'BF - added fix for Funny bug'
# make the patch files
git format-patch -M -C master
Then, create a ticket in the `Numpy Trac <http://projects.scipy.org/numpy/>`__,
attach the generated patch files there, and notify the `NumPy mailing list`_
about your contribution.
In detail
---------
#. Tell git_ who you are so it can label the commits you've made::
git config --global user.email you@yourdomain.example.com
git config --global user.name "Your Name Comes Here"
#. If you don't already have one, clone a copy of the NumPy_ repository::
git clone git://github.com/numpy/numpy.git
cd numpy
#. Make a 'feature branch'. This will be where you work on your bug
fix. It's nice and safe and leaves you with access to an unmodified
copy of the code in the main branch::
git branch the-fix-im-thinking-of
git checkout the-fix-im-thinking-of
#. Do some edits, and commit them as you go::
# hack, hack, hack
# Tell git about any new files you've made
git add somewhere/tests/test_my_bug.py
# commit work in progress as you go
git commit -am 'BF - added tests for Funny bug'
# hack hack, hack
git commit -am 'BF - added fix for Funny bug'
Note the ``-am`` options to ``commit``. The ``m`` flag just signals
that you're going to type a message on the command line. The ``a``
flag - you can just take on faith - or see `why the -a flag?`_.
#. When you have finished, check you have committed all your changes::
git status
#. Finally, make your commits into patches. You want all the commits
since you branched from the ``master`` branch::
git format-patch -M -C master
You will now have several files named for the commits::
0001-BF-added-tests-for-Funny-bug.patch
0002-BF-added-fix-for-Funny-bug.patch
The recommended way to proceed is to attach these files to
an enhancement ticket in the `Numpy Trac <http://projects.scipy.org/numpy/>`__
and to send a mail about the enhancement to the `NumPy mailing list`_.
Alternatively, you can send your changes via Github; see below and
:ref:`asking-for-merging`.
When you are done, to switch back to the main copy of the code, just
return to the ``master`` branch::
git checkout master
Moving from patching to development
===================================
If you find you have done some patches, and you have one or more feature
branches, you will probably want to switch to development mode. You can
do this with the repository you have.
Fork the NumPy_ repository on github_ - :ref:`forking`. Then::
# checkout and refresh master branch from main repo
git checkout master
git fetch origin
git merge --ff-only origin/master
# rename pointer to main repository to 'upstream'
git remote rename origin upstream
# point your repo to default read / write to your fork on github
git remote add origin git@github.com:your-user-name/numpy.git
# push up any branches you've made and want to keep
git push origin the-fix-im-thinking-of
Then you can, if you want, follow the :ref:`development-workflow`.
.. include:: git_links.inc
.. _NumPy: http://numpy.scipy.org
.. _`NumPy github`: http://github.com/numpy/numpy
.. _`NumPy mailing list`: http://scipy.org/Mailing_Lists
#####################
Contributing to Numpy
#####################
.. toctree::
:maxdepth: 3
gitwash/index
For core developers: see :ref:`development-workflow`.
********
Glossary
********
.. toctree::
.. glossary::
.. automodule:: numpy.doc.glossary
Jargon
------
.. automodule:: numpy.doc.jargon
*************
Numpy License
*************
Copyright (c) 2005, NumPy Developers
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of the NumPy Developers nor the names of any
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.. _arrays:
*************
Array objects
*************
.. currentmodule:: numpy
NumPy provides an N-dimensional array type, the :ref:`ndarray
<arrays.ndarray>`, which describes a collection of "items" of the same
type. The items can be :ref:`indexed <arrays.indexing>` using for
example N integers.
All ndarrays are :term:`homogeneous <homogenous>`: every item takes up the same size
block of memory, and all blocks are interpreted in exactly the same
way. How each item in the array is to be interpreted is specified by a
separate :ref:`data-type object <arrays.dtypes>`, one of which is associated
with every array. In addition to basic types (integers, floats,
*etc.*), the data type objects can also represent data structures.
An item extracted from an array, *e.g.*, by indexing, is represented
by a Python object whose type is one of the :ref:`array scalar types
<arrays.scalars>` built into NumPy.  The array scalars also allow easy
manipulation of more complicated arrangements of data.
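A minimal sketch of the three objects described above, assuming only that
NumPy is installed:

```python
import numpy as np

a = np.array([1, 2, 3], dtype=np.int16)  # the ndarray itself
print(a.dtype)                           # the data-type object: int16
x = a[0]                                 # indexing returns an array scalar
print(type(x))                           # an array scalar type, numpy.int16
```

The data-type object lives on the array; the array scalar appears only when
a single element is extracted.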
.. figure:: figures/threefundamental.png
**Figure**
Conceptual diagram showing the relationship between the three
fundamental objects used to describe the data in an array: 1) the
ndarray itself, 2) the data-type object that describes the layout
of a single fixed-size element of the array, 3) the array-scalar
Python object that is returned when a single element of the array
is accessed.
.. toctree::
:maxdepth: 2
arrays.ndarray
arrays.scalars
arrays.dtypes
arrays.indexing
arrays.classes
maskedarray
arrays.interface
System configuration
====================
.. sectionauthor:: Travis E. Oliphant
When NumPy is built, information about system configuration is
recorded, and is made available for extension modules using Numpy's C
API. These are mostly defined in ``numpyconfig.h`` (included in
``ndarrayobject.h``). The public symbols are prefixed by ``NPY_*``.
Numpy also offers some functions for querying information about the
platform in use.
For private use, NumPy also constructs a ``config.h`` in the NumPy
include directory.  This file is not exported by NumPy (that is, a
Python extension which uses the NumPy C API will not see those
symbols), to avoid namespace pollution.
Data type sizes
---------------
The :cdata:`NPY_SIZEOF_{CTYPE}` constants are defined so that sizeof
information is available to the pre-processor.
.. cvar:: NPY_SIZEOF_SHORT
sizeof(short)
.. cvar:: NPY_SIZEOF_INT
sizeof(int)
.. cvar:: NPY_SIZEOF_LONG
sizeof(long)
.. cvar:: NPY_SIZEOF_LONG_LONG
sizeof(longlong) where longlong is defined appropriately on the
platform (A macro defines **NPY_SIZEOF_LONGLONG** as well.)
.. cvar:: NPY_SIZEOF_PY_LONG_LONG
.. cvar:: NPY_SIZEOF_FLOAT
sizeof(float)
.. cvar:: NPY_SIZEOF_DOUBLE
sizeof(double)
.. cvar:: NPY_SIZEOF_LONG_DOUBLE
sizeof(longdouble) (A macro defines **NPY_SIZEOF_LONGDOUBLE** as well.)
.. cvar:: NPY_SIZEOF_PY_INTPTR_T
Size of a pointer on this platform (sizeof(void \*)) (A macro defines
NPY_SIZEOF_INTP as well.)
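The same size information is visible at the Python level through
``dtype.itemsize``; a quick sanity check against ``ctypes`` (a sketch for
illustration, not part of the NumPy API):

```python
import ctypes
import numpy as np

# dtype.itemsize mirrors the NPY_SIZEOF_* constants for this platform
assert np.dtype(np.intc).itemsize == ctypes.sizeof(ctypes.c_int)
assert np.dtype(np.double).itemsize == ctypes.sizeof(ctypes.c_double)
# np.intp is the pointer-sized integer (NPY_SIZEOF_PY_INTPTR_T)
assert np.dtype(np.intp).itemsize == ctypes.sizeof(ctypes.c_void_p)
```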
Platform information
--------------------
.. cvar:: NPY_CPU_X86
.. cvar:: NPY_CPU_AMD64
.. cvar:: NPY_CPU_IA64
.. cvar:: NPY_CPU_PPC
.. cvar:: NPY_CPU_PPC64
.. cvar:: NPY_CPU_SPARC
.. cvar:: NPY_CPU_SPARC64
.. cvar:: NPY_CPU_S390
.. cvar:: NPY_CPU_PARISC
.. versionadded:: 1.3.0
CPU architecture of the platform; only one of the above is
defined.
Defined in ``numpy/npy_cpu.h``
.. cvar:: NPY_LITTLE_ENDIAN
.. cvar:: NPY_BIG_ENDIAN
.. cvar:: NPY_BYTE_ORDER
.. versionadded:: 1.3.0
Portable alternatives to the ``endian.h`` macros of GNU Libc.
If big endian, :cdata:`NPY_BYTE_ORDER` == :cdata:`NPY_BIG_ENDIAN`, and
similarly for little endian architectures.
Defined in ``numpy/npy_endian.h``.
.. cfunction:: PyArray_GetEndianness()
.. versionadded:: 1.3.0
Returns the endianness of the current platform.
One of :cdata:`NPY_CPU_BIG`, :cdata:`NPY_CPU_LITTLE`,
or :cdata:`NPY_CPU_UNKNOWN_ENDIAN`.
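From Python, the platform byte order can be checked without the C API; a
small sketch comparing NumPy's native-order typestring against
``sys.byteorder``:

```python
import sys
import numpy as np

# a native-order ('=') dtype reports '<' on little-endian platforms
# and '>' on big-endian ones in its typestring
native = np.dtype('=i4').str[0]
assert (native == '<') == (sys.byteorder == 'little')
```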
Data Type API
=============
.. sectionauthor:: Travis E. Oliphant
The standard array can have 24 different data types (and has some
support for adding your own types). These data types all have an
enumerated type, an enumerated type-character, and a corresponding
array scalar Python type object (placed in a hierarchy). There are
also standard C typedefs to make it easier to manipulate elements of
the given data type. For the numeric types, there are also bit-width
equivalent C typedefs and named typenumbers that make it easier to
select the precision desired.
.. warning::
The names for the types in C code follow C naming conventions
more closely.  The Python names for these types follow Python
conventions. Thus, :cdata:`NPY_FLOAT` picks up a 32-bit float in
C, but :class:`numpy.float_` in Python corresponds to a 64-bit
double. The bit-width names can be used in both Python and C for
clarity.
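The C/Python naming distinction in the warning is easy to verify from
Python (a sketch; ``dtype.num`` is the dtype's enumerated type number and
``dtype.char`` its character code):

```python
import numpy as np

# NPY_FLOAT is a 32-bit C float; the Python float type is a 64-bit double
assert np.dtype(np.float32).itemsize == 4
assert np.dtype(float).itemsize == 8
# every dtype carries its enumerated type number and character code
print(np.dtype('float64').num, np.dtype('float64').char)
```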
Enumerated Types
----------------
A list of enumerated types is defined, providing the basic 24
data types plus some useful generic names.  Whenever the code requires
a type number, one of these enumerated types is requested. The types
are all called :cdata:`NPY_{NAME}` where ``{NAME}`` can be
**BOOL**, **BYTE**, **UBYTE**, **SHORT**, **USHORT**, **INT**,
**UINT**, **LONG**, **ULONG**, **LONGLONG**, **ULONGLONG**,
**HALF**, **FLOAT**, **DOUBLE**, **LONGDOUBLE**, **CFLOAT**,
**CDOUBLE**, **CLONGDOUBLE**, **DATETIME**, **TIMEDELTA**,
**OBJECT**, **STRING**, **UNICODE**, **VOID**
**NTYPES**, **NOTYPE**, **USERDEF**, **DEFAULT_TYPE**
The various character codes indicating certain types are also part of
an enumerated list. References to type characters (should they be
needed at all) should always use these enumerations. The form of them
is :cdata:`NPY_{NAME}LTR` where ``{NAME}`` can be
**BOOL**, **BYTE**, **UBYTE**, **SHORT**, **USHORT**, **INT**,
**UINT**, **LONG**, **ULONG**, **LONGLONG**, **ULONGLONG**,
**HALF**, **FLOAT**, **DOUBLE**, **LONGDOUBLE**, **CFLOAT**,
**CDOUBLE**, **CLONGDOUBLE**, **DATETIME**, **TIMEDELTA**,
**OBJECT**, **STRING**, **VOID**
**INTP**, **UINTP**
**GENBOOL**, **SIGNED**, **UNSIGNED**, **FLOATING**, **COMPLEX**
The latter group of ``{NAME}s`` corresponds to letters used in the array
interface typestring specification.
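At the Python level, the per-type character codes (:cdata:`NPY_{NAME}LTR`
at the C level) show up as ``dtype.char``, and the generic letters as
``dtype.kind``; a small sketch:

```python
import numpy as np

# character code for a given type (NPY_DOUBLELTR is 'd', NPY_BOOLLTR is '?')
assert np.dtype(np.float64).char == 'd'
assert np.dtype(np.bool_).char == '?'
# the generic kind letter used in array interface typestrings
assert np.dtype(np.float64).kind == 'f'   # FLOATING
```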
Defines
-------
Max and min values for integers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. cvar:: NPY_MAX_INT{bits}
.. cvar:: NPY_MAX_UINT{bits}
.. cvar:: NPY_MIN_INT{bits}
These are defined for ``{bits}`` = 8, 16, 32, 64, 128, and 256 and provide
the maximum (minimum) value of the corresponding (unsigned) integer
type.  Note: the actual integer type may not be available on all
platforms (e.g. 128-bit and 256-bit integers are rare).
.. cvar:: NPY_MIN_{type}
This is defined for ``{type}`` = **BYTE**, **SHORT**, **INT**,
**LONG**, **LONGLONG**, **INTP**
.. cvar:: NPY_MAX_{type}
This is defined for ``{type}`` = **BYTE**, **UBYTE**,
**SHORT**, **USHORT**, **INT**, **UINT**, **LONG**, **ULONG**,
**LONGLONG**, **ULONGLONG**, **INTP**, **UINTP**
Number of bits in data types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
All :cdata:`NPY_SIZEOF_{CTYPE}` constants have corresponding
:cdata:`NPY_BITSOF_{CTYPE}` constants defined. The :cdata:`NPY_BITSOF_{CTYPE}`
constants provide the number of bits in the data type. Specifically,
the available ``{CTYPE}s`` are
**BOOL**, **CHAR**, **SHORT**, **INT**, **LONG**,
**LONGLONG**, **FLOAT**, **DOUBLE**, **LONGDOUBLE**
Bit-width references to enumerated typenums
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
All of the numeric data types (integer, floating point, and complex)
have constants that are defined to be a specific enumerated type
number. Exactly which enumerated type a bit-width type refers to is
platform dependent. In particular, the constants available are
:cdata:`PyArray_{NAME}{BITS}` where ``{NAME}`` is **INT**, **UINT**,
**FLOAT**, **COMPLEX** and ``{BITS}`` can be 8, 16, 32, 64, 80, 96, 128,
160, 192, 256, and 512. Obviously not all bit-widths are available on
all platforms for all the kinds of numeric types. Commonly 8-, 16-,
32-, 64-bit integers; 32-, 64-bit floats; and 64-, 128-bit complex
types are available.
Integer that can hold a pointer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The constants **PyArray_INTP** and **PyArray_UINTP** refer to an
enumerated integer type that is large enough to hold a pointer on the
platform.  Index arrays should always be converted to **PyArray_INTP**,
because the dimension of the array is of type ``npy_intp``.
C-type names
------------
There are standard variable types for each of the numeric data types
and the bool data type. Some of these are already available in the
C-specification. You can create variables in extension code with these
types.
Boolean
^^^^^^^
.. ctype:: npy_bool
unsigned char; The constants :cdata:`NPY_FALSE` and
:cdata:`NPY_TRUE` are also defined.
(Un)Signed Integer
^^^^^^^^^^^^^^^^^^
Unsigned versions of the integers can be defined by prepending a 'u'
to the integer name.
.. ctype:: npy_(u)byte
(unsigned) char
.. ctype:: npy_(u)short
(unsigned) short
.. ctype:: npy_(u)int
(unsigned) int
.. ctype:: npy_(u)long
(unsigned) long int
.. ctype:: npy_(u)longlong
(unsigned) long long int
.. ctype:: npy_(u)intp
(unsigned) Py_intptr_t (an integer that is the size of a pointer on
the platform).
(Complex) Floating point
^^^^^^^^^^^^^^^^^^^^^^^^
.. ctype:: npy_(c)float
float
.. ctype:: npy_(c)double
double
.. ctype:: npy_(c)longdouble
long double
The complex types are structures with **.real** and **.imag** members
(in that order).
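That struct layout (real first, then imag) is observable from Python by
reinterpreting a complex array's bytes; a sketch:

```python
import numpy as np

z = np.complex128(3.0 + 4.0j)     # corresponds to npy_cdouble at the C level
assert z.real == 3.0 and z.imag == 4.0
# viewing the same bytes as float64 exposes the (real, imag) ordering
parts = np.array([z]).view(np.float64)
assert parts[0] == 3.0 and parts[1] == 4.0
```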
Bit-width names
^^^^^^^^^^^^^^^
There are also typedefs for signed integers, unsigned integers,
floating point, and complex floating point types of specific bit-
widths. The available type names are
:ctype:`npy_int{bits}`, :ctype:`npy_uint{bits}`, :ctype:`npy_float{bits}`,
and :ctype:`npy_complex{bits}`
where ``{bits}`` is the number of bits in the type and can be **8**,
**16**, **32**, **64**, 128, and 256 for integer types; 16, **32**
, **64**, 80, 96, 128, and 256 for floating-point types; and 32,
**64**, **128**, 160, 192, and 512 for complex-valued types. Which
bit-widths are available is platform dependent. The bolded bit-widths
are usually available on all platforms.
Printf Formatting
-----------------
For help in printing, the following strings are defined as the correct
format specifier in printf and related commands.
:cdata:`NPY_LONGLONG_FMT`, :cdata:`NPY_ULONGLONG_FMT`,
:cdata:`NPY_INTP_FMT`, :cdata:`NPY_UINTP_FMT`,
:cdata:`NPY_LONGDOUBLE_FMT`
==================================
Generalized Universal Function API
==================================
There is a general need for looping over not only functions on scalars
but also over functions on vectors (or arrays), as explained on
http://scipy.org/scipy/numpy/wiki/GeneralLoopingFunctions. We propose
to realize this concept by generalizing the universal functions
(ufuncs), and provide a C implementation that adds ~500 lines
to the numpy code base. In current (specialized) ufuncs, the elementary
function is limited to element-by-element operations, whereas the
generalized version supports "sub-array" by "sub-array" operations.
The Perl vector library PDL provides a similar functionality and its
terms are re-used in the following.
Each generalized ufunc has information associated with it that states
what the "core" dimensionality of the inputs is, as well as the
corresponding dimensionality of the outputs (the element-wise ufuncs
have zero core dimensions). The list of the core dimensions for all
arguments is called the "signature" of a ufunc. For example, the
ufunc numpy.add has signature ``(),()->()`` defining two scalar inputs
and one scalar output.
Another example is (see the GeneralLoopingFunctions page) the function
``inner1d(a,b)`` with a signature of ``(i),(i)->()``. This applies the
inner product along the last axis of each input, but keeps the
remaining indices intact. For example, where ``a`` is of shape ``(3,5,N)``
and ``b`` is of shape ``(5,N)``, this will return an output of shape ``(3,5)``.
The underlying elementary function is called 3*5 times. In the
signature, we specify one core dimension ``(i)`` for each input and zero core
dimensions ``()`` for the output, since it takes two 1-d arrays and
returns a scalar. By using the same name ``i``, we specify that the two
corresponding dimensions should be of the same size (or one of them is
of size 1 and will be broadcasted).
The dimensions beyond the core dimensions are called "loop" dimensions. In
the above example, this corresponds to ``(3,5)``.
The usual numpy "broadcasting" rules apply, where the signature
determines how the dimensions of each input/output object are split
into core and loop dimensions:
#. While an input array has a smaller dimensionality than the corresponding
number of core dimensions, 1's are pre-pended to its shape.
#. The core dimensions are removed from all inputs and the remaining
dimensions are broadcast together, defining the loop dimensions.
#. The output is given by the loop dimensions plus the output core dimensions.
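The ``inner1d`` example above can be emulated with ``np.einsum``, which
performs the same split into a contracted core dimension and broadcast
loop dimensions (a sketch, not the generalized-ufunc machinery itself):

```python
import numpy as np

a = np.ones((3, 5, 4))          # loop dims (3, 5), core dim i = 4
b = np.ones((5, 4))             # loop dims (5,),  core dim i = 4
# contract the shared core dimension i; broadcast the loop dimensions
out = np.einsum('...i,...i->...', a, b)
assert out.shape == (3, 5)      # only the loop dimensions remain
assert np.allclose(out, 4.0)    # each inner product sums four 1*1 terms
```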
Definitions
-----------
Elementary Function
Each ufunc consists of an elementary function that performs the
most basic operation on the smallest portion of array arguments
(e.g. adding two numbers is the most basic operation in adding two
arrays). The ufunc applies the elementary function multiple times
on different parts of the arrays. The input/output of elementary
functions can be vectors; e.g., the elementary function of inner1d
takes two vectors as input.
Signature
A signature is a string describing the input/output dimensions of
the elementary function of a ufunc. See section below for more
details.
Core Dimension
The dimensionality of each input/output of an elementary function
is defined by its core dimensions (zero core dimensions correspond
to a scalar input/output). The core dimensions are mapped to the
last dimensions of the input/output arrays.
Dimension Name
A dimension name represents a core dimension in the signature.
Different dimensions may share a name, indicating that they are of
the same size (or are broadcastable).
Dimension Index
A dimension index is an integer representing a dimension name. It
enumerates the dimension names according to the order of the first
occurrence of each name in the signature.
Details of Signature
--------------------
The signature defines "core" dimensionality of input and output
variables, and thereby also defines the contraction of the
dimensions. The signature is represented by a string of the
following format:
* Core dimensions of each input or output array are represented by a
list of dimension names in parentheses, ``(i_1,...,i_N)``; a scalar
input/output is denoted by ``()``. Instead of ``i_1``, ``i_2``,
etc, one can use any valid Python variable name.
* Dimension lists for different arguments are separated by ``","``.
Input/output arguments are separated by ``"->"``.
* If one uses the same dimension name in multiple locations, this
enforces the same size (or broadcastable size) of the corresponding
dimensions.
The formal syntax of signatures is as follows::
<Signature> ::= <Input arguments> "->" <Output arguments>
<Input arguments> ::= <Argument list>
<Output arguments> ::= <Argument list>
<Argument list> ::= nil | <Argument> | <Argument> "," <Argument list>
<Argument> ::= "(" <Core dimension list> ")"
<Core dimension list> ::= nil | <Dimension name> |
<Dimension name> "," <Core dimension list>
<Dimension name> ::= valid Python variable name
Notes:
#. All quotes are for clarity.
#. Core dimensions that share the same name must be broadcastable, as
are the two ``i`` in our example above. Each dimension name typically
corresponds to one level of looping in the elementary function's
implementation.
#. White spaces are ignored.
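The grammar above is simple enough to parse in a few lines. The sketch below (a hypothetical helper, not NumPy's internal parser; minimal validation only) ignores whitespace and also assigns each dimension name its dimension index, i.e. its position by first occurrence:

```python
import re

def parse_signature(signature):
    """Parse a gufunc signature into input/output core-dimension
    tuples plus a name -> dimension-index mapping."""
    sig = re.sub(r"\s+", "", signature)  # white spaces are ignored
    inputs, arrow, outputs = sig.partition("->")
    if not arrow:
        raise ValueError("signature must contain '->'")
    dim_index = {}

    def parse_side(side):
        args = []
        # Each argument is a parenthesized core-dimension list.
        for arg in re.findall(r"\(([^)]*)\)", side):
            names = tuple(n for n in arg.split(",") if n)
            for name in names:
                # Enumerate names by first occurrence in the signature.
                dim_index.setdefault(name, len(dim_index))
            args.append(names)
        return args

    return parse_side(inputs), parse_side(outputs), dim_index

ins, outs, idx = parse_signature("(m,n),(n,p)->(m,p)")
# ins  == [('m', 'n'), ('n', 'p')]
# outs == [('m', 'p')]
# idx  == {'m': 0, 'n': 1, 'p': 2}
```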
Here are some examples of signatures:
+-------------+------------------------+-----------------------------------+
| add | ``(),()->()`` | |
+-------------+------------------------+-----------------------------------+
| inner1d | ``(i),(i)->()`` | |
+-------------+------------------------+-----------------------------------+
| sum1d | ``(i)->()`` | |
+-------------+------------------------+-----------------------------------+
| dot2d | ``(m,n),(n,p)->(m,p)`` | matrix multiplication |
+-------------+------------------------+-----------------------------------+
| outer_inner | ``(i,t),(j,t)->(i,j)`` | inner over the last dimension, |
| | | outer over the second to last, |
| | | and loop/broadcast over the rest. |
+-------------+------------------------+-----------------------------------+
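The ``inner1d`` entry from the table can be mimicked with plain Python lists: the elementary function contracts the shared core dimension ``i``, and a leading loop dimension simply means the elementary function is applied once per loop index (a sketch for illustration, not NumPy's implementation):

```python
def inner1d(a, b):
    """Elementary function for signature (i),(i)->(): a dot product
    over the single shared core dimension i."""
    assert len(a) == len(b)  # the two i dimensions must match
    return sum(x * y for x, y in zip(a, b))

# A loop dimension of size 2 around the (i),(i)->() core:
pairs = [([1, 2, 3], [4, 5, 6]), ([1, 0, 0], [7, 8, 9])]
results = [inner1d(a, b) for a, b in pairs]
# results == [32, 7]
```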
C-API for implementing Elementary Functions
-------------------------------------------
The current interface remains unchanged, and ``PyUFunc_FromFuncAndData``
can still be used to implement (specialized) ufuncs, consisting of
scalar elementary functions.
One can use ``PyUFunc_FromFuncAndDataAndSignature`` to declare a more
general ufunc. The argument list is the same as
``PyUFunc_FromFuncAndData``, with an additional argument specifying the
signature as a C string.
Furthermore, the callback function is of the same type as before,
``void (*foo)(char **args, intp *dimensions, intp *steps, void *func)``.
When invoked, ``args`` is a list of length ``nargs`` containing
the data of all input/output arguments. For a scalar elementary
function, ``steps`` is also of length ``nargs``, denoting the strides used
for the arguments. ``dimensions`` is a pointer to a single integer
defining the size of the axis to be looped over.
For a non-trivial signature, ``dimensions`` will also contain the sizes
of the core dimensions, starting at the second entry. Only
one size is provided for each unique dimension name and the sizes are
given according to the first occurrence of a dimension name in the
signature.
The first ``nargs`` elements of ``steps`` remain the same as for scalar
ufuncs. The following elements contain the strides of all core
dimensions for all arguments in order.
For example, consider a ufunc with signature ``(i,j),(i)->()``. In
this case, ``args`` will contain three pointers to the data of the
input/output arrays ``a``, ``b``, ``c``. Furthermore, ``dimensions`` will be
``[N, I, J]``, giving the size ``N`` of the loop and the sizes ``I`` and ``J``
for the core dimensions ``i`` and ``j``. Finally, ``steps`` will be
``[a_N, b_N, c_N, a_i, a_j, b_i]``, containing all necessary strides.
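The same layout can be simulated in Python for the simpler signature ``(i),(i)->()``, where ``dimensions`` is ``[N, I]`` and ``steps`` is ``[a_N, b_N, c_N, a_i, b_i]``. This is a sketch of the callback's loop structure only; strides are in elements rather than bytes, and the buffer/offset representation is an assumption made to keep the sketch self-contained:

```python
def inner1d_loop(buffers, offsets, dimensions, steps):
    """Stride-based loop mirroring a gufunc callback for (i),(i)->()."""
    a_buf, b_buf, c_buf = buffers
    a_off, b_off, c_off = offsets
    N, I = dimensions                  # loop size, then core dim i
    a_N, b_N, c_N, a_i, b_i = steps    # per-argument loop and core strides
    for n in range(N):                 # the loop dimension
        acc = 0
        for i in range(I):             # the core dimension i
            acc += (a_buf[a_off + n * a_N + i * a_i]
                    * b_buf[b_off + n * b_N + i * b_i])
        c_buf[c_off + n * c_N] = acc

# Two contiguous length-3 rows in a; b is a single row broadcast
# across the loop by giving it a zero loop stride (b_N = 0):
a = [1, 2, 3, 4, 5, 6]
b = [1, 1, 1]
c = [0, 0]
inner1d_loop((a, b, c), (0, 0, 0), [2, 3], [3, 0, 1, 1, 1])
# c == [6, 15]
```

Giving an argument a zero stride in a loop dimension is exactly how broadcasting is realized at this level: the same data is revisited on every loop iteration.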
.. _c-api:
###########
Numpy C-API
###########
.. sectionauthor:: Travis E. Oliphant
| Beware of the man who won't be bothered with details.
| --- *William Feather, Sr.*
| The truth is out there.
| --- *Chris Carter, The X Files*
NumPy provides a C-API to enable users to extend the system and get
access to the array object for use in other routines. The best way to
truly understand the C-API is to read the source code. If you are
unfamiliar with (C) source code, however, this can be a daunting
experience at first. Be assured that the task becomes easier with
practice, and you may be surprised at how simple the C-code can be to
understand. Even if you don't think you can write C-code from scratch,
it is much easier to understand and modify already-written source code
than to create it *de novo*.
Python extensions are especially straightforward to understand because
they all have a very similar structure. Admittedly, NumPy is not a
trivial extension to Python, and may take a little more snooping to
grasp. This is especially true because of the code-generation
techniques, which simplify maintenance of very similar code, but can
make the code a little less readable to beginners. Still, with a
little persistence, the code can be opened to your understanding. It
is my hope that this guide to the C-API can assist in the process of
becoming familiar with the compiled-level work that can be done with
NumPy in order to squeeze that last bit of necessary speed out of your
code.
.. currentmodule:: numpy-c-api
.. toctree::
:maxdepth: 2
c-api.types-and-structures
c-api.config
c-api.dtype
c-api.array
c-api.iterator
c-api.ufunc
c-api.generalized-ufuncs
c-api.coremath