:mod:`packaging.pypi.simple` --- Crawler using the PyPI "simple" interface
==========================================================================

.. module:: packaging.pypi.simple
   :synopsis: Crawler using the screen-scraping "simple" interface to fetch info
              and distributions.


`packaging.pypi.simple` can process Python package indexes and provides
useful information about distributions. It can also crawl local indexes.

You should use `packaging.pypi.simple` to:

    * Search for distributions by name and version.
    * Process an index's external pages.
    * Download distributions by name and version.

It should not be used for:

    * Anything that would require processing a large part of the index (like
      "finding all distributions with a specific version, no matter the
      name").


API
---

.. class:: Crawler


Usage Examples
--------------

To help you understand how to use the `Crawler` class, here are some basic
usage examples.

Request the simple index to get a specific distribution
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Suppose you want to scan an index to get a list of distributions for
the "foobar" project. You can use the `get_releases` method for that.
It will browse the project page and return :class:`ReleaseInfo` objects
for each download link it finds. ::

   >>> from packaging.pypi.simple import Crawler
   >>> client = Crawler()
   >>> client.get_releases("FooBar")
   [<ReleaseInfo "FooBar 1.1">, <ReleaseInfo "FooBar 1.2">]
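
Each :class:`ReleaseInfo` object describes a single release of the project.
As a minimal sketch, assuming these objects expose `name` and `version`
attributes (an assumption, not documented above), you could iterate over the
results::

   >>> for release in client.get_releases("FooBar"):
   ...     print(release.name, release.version)
   FooBar 1.1
   FooBar 1.2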


Note that you can also ask the client for specific versions, using version
specifiers (described in `PEP 345
<http://www.python.org/dev/peps/pep-0345/#version-specifiers>`_)::

   >>> client.get_releases("FooBar < 1.2")
   [<ReleaseInfo "FooBar 1.1">]


`get_releases` returns a list of :class:`ReleaseInfo` objects, but you can
also get the single best distribution that fulfills your requirements, using
`get_release`::

   >>> client.get_release("FooBar < 1.2")
   <ReleaseInfo "FooBar 1.1">


Download distributions
^^^^^^^^^^^^^^^^^^^^^^

Since it can find the URLs of the distributions provided by PyPI, the
`Crawler` client can also download the distributions and put them in a
temporary destination for you::

   >>> client.download("foobar")
   /tmp/temp_dir/foobar-1.2.tar.gz


You can also specify the directory you want to download to::

   >>> client.download("foobar", "/path/to/my/dir")
   /path/to/my/dir/foobar-1.2.tar.gz
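
Since `download` returns the path of the fetched archive, you can pass it
straight to the standard library. A minimal sketch (the archive contents
shown here are purely illustrative)::

   >>> import tarfile
   >>> archive = client.download("foobar", "/path/to/my/dir")
   >>> tarfile.open(archive).getnames()
   ['foobar-1.2/setup.py', 'foobar-1.2/foobar.py']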


While downloading, the MD5 hash of the archive is checked; if it does not
match, the download is tried a second time, and if it fails again,
`MD5HashDoesNotMatchError` is raised.
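
If you need to react to a corrupted download, catch that exception. A minimal
sketch, assuming `MD5HashDoesNotMatchError` can be imported from
:mod:`packaging.pypi.errors` (an assumption; check the actual location)::

   >>> from packaging.pypi.errors import MD5HashDoesNotMatchError  # assumed path
   >>> try:
   ...     client.download("foobar")
   ... except MD5HashDoesNotMatchError:
   ...     print("archive corrupted on the server or during transfer")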

Internally, it is not the `Crawler` that downloads the distributions, but the
`DistributionInfo` class. Please refer to its documentation for more details.


Following PyPI external links
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, packaging does *not* follow the links provided by HTML pages
in the "simple" index when looking for distribution downloads.

It's possible to tell the `Crawler` to follow external links by setting the
`follow_externals` attribute, at instantiation time or afterwards::

   >>> client = Crawler(follow_externals=True)

or ::

   >>> client = Crawler()
   >>> client.follow_externals = True


Working with external indexes, and mirrors
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, `Crawler` relies on the Python Package Index served by PyPI
(http://pypi.python.org/simple).

If you need to work with a local index, or with private indexes, you can
specify one using the `index_url` parameter::

   >>> client = Crawler(index_url="file://filesystem/path/")

or ::

   >>> client = Crawler(index_url="http://some.specific.url/")


You can also specify mirrors to fall back on in case the `index_url` you
provided does not respond, or does not respond correctly. The default
behavior for `Crawler` is to use the mirror list provided by the python.org
DNS records, as described in :PEP:`381` about the mirroring infrastructure.

If you don't want to rely on these, you can specify the list of mirrors you
want to try via the `mirrors` parameter. It accepts any iterable::

   >>> mirrors = ["http://first.mirror", "http://second.mirror"]
   >>> client = Crawler(mirrors=mirrors)
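
All of these constructor parameters can be combined. For instance, here is a
crawler pointed at a private index, following external links, and using
explicit fallback mirrors (a sketch combining the options shown above)::

   >>> client = Crawler(index_url="http://some.specific.url/",
   ...                  follow_externals=True,
   ...                  mirrors=["http://first.mirror", "http://second.mirror"])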


Searching in the simple index
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It's possible to search for projects with specific names in the package index.
Suppose you want to find all projects containing the "distutils" keyword::

   >>> client.search_projects("distutils")
   [<Project "collective.recipe.distutils">, <Project "Distutils">, <Project
   "Packaging">, <Project "distutilscross">, <Project "lpdistutils">, <Project
   "taras.recipe.distutils">, <Project "zerokspot.recipe.distutils">]


You can also search for projects whose names start or end with a specific
text, using a wildcard::

   >>> client.search_projects("distutils*")
   [<Project "Distutils">, <Project "Packaging">, <Project "distutilscross">]

   >>> client.search_projects("*distutils")
   [<Project "collective.recipe.distutils">, <Project "Distutils">, <Project
   "lpdistutils">, <Project "taras.recipe.distutils">, <Project
   "zerokspot.recipe.distutils">]