:tocdepth: 2

===============
Programming FAQ
===============

.. only:: html

   .. contents::

General Questions
=================

Is there a source code level debugger with breakpoints, single-stepping, etc.?
------------------------------------------------------------------------------

Yes.

Several debuggers for Python are described below, and the built-in function
:func:`breakpoint` allows you to drop into any of them.

The pdb module is a simple but adequate console-mode debugger for Python. It is
part of the standard Python library, and is :mod:`documented in the Library
Reference Manual <pdb>`. You can also write your own debugger by using the code
for pdb as an example.
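
For example, a minimal sketch of pausing a program and dropping into pdb with
the built-in :func:`breakpoint` (Python 3.7+; the function below is purely
illustrative)::

   def divide(a, b):
       breakpoint()    # pauses here and starts pdb, or the debugger named in
                       # the PYTHONBREAKPOINT environment variable
       return a / b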

The IDLE interactive development environment, which is part of the standard
Python distribution (normally available as Tools/scripts/idle), includes a
graphical debugger.

PythonWin is a Python IDE that includes a GUI debugger based on pdb.  The
PythonWin debugger colors breakpoints and has quite a few cool features such as
debugging non-PythonWin programs.  PythonWin is available as part of the
`pywin32 <https://github.com/mhammond/pywin32>`_ project and
as part of the
`ActivePython <https://www.activestate.com/products/python/>`_ distribution.

`Eric <http://eric-ide.python-projects.org/>`_ is an IDE built on PyQt
and the Scintilla editing component.

`trepan3k <https://github.com/rocky/python3-trepan/>`_ is a gdb-like debugger.

`Visual Studio Code <https://code.visualstudio.com/>`_ is an IDE with debugging
tools that integrates with version-control software.

There are a number of commercial Python IDEs that include graphical debuggers.
They include:

* `Wing IDE <https://wingware.com/>`_
* `Komodo IDE <https://www.activestate.com/products/komodo-ide/>`_
* `PyCharm <https://www.jetbrains.com/pycharm/>`_


Are there tools to help find bugs or perform static analysis?
-------------------------------------------------------------

Yes.

`Pylint <https://www.pylint.org/>`_ and
`Pyflakes <https://github.com/PyCQA/pyflakes>`_ do basic checking that will
help you catch bugs sooner.

Static type checkers such as `Mypy <http://mypy-lang.org/>`_,
`Pyre <https://pyre-check.org/>`_, and
`Pytype <https://github.com/google/pytype>`_ can check type hints in Python
source code.
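
For example, given a small annotated function (a hedged sketch; the names are
illustrative), a checker such as Mypy will report the bad call below without
running the program::

   from typing import List

   def average(values: List[float]) -> float:
       return sum(values) / len(values)

   average([1.0, 2.0, 3.0])   # fine
   average("oops")            # only fails at run time; a static checker flags
                              # the incompatible argument type beforehand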


.. _faq-create-standalone-binary:

How can I create a stand-alone binary from a Python script?
-----------------------------------------------------------

You don't need the ability to compile Python to C code if all you want is a
stand-alone program that users can download and run without having to install
the Python distribution first.  There are a number of tools that determine the
set of modules required by a program and bind these modules together with a
Python binary to produce a single executable.

One is to use the freeze tool, which is included in the Python source tree as
``Tools/freeze``. It converts Python byte code to C arrays; with a C compiler you
can embed all your modules into a new program, which is then linked with the
standard Python modules.

It works by scanning your source recursively for import statements (in both
forms) and looking for the modules in the standard Python path as well as in the
source directory (for built-in modules).  It then turns the bytecode for modules
written in Python into C code (array initializers that can be turned into code
objects using the marshal module) and creates a custom-made config file that
only contains those built-in modules which are actually used in the program.  It
then compiles the generated C code and links it with the rest of the Python
interpreter to form a self-contained binary which acts exactly like your script.

The following packages can help with the creation of console and GUI
executables:

* `Nuitka <https://nuitka.net/>`_ (Cross-platform)
* `PyInstaller <http://www.pyinstaller.org/>`_ (Cross-platform)
* `PyOxidizer <https://pyoxidizer.readthedocs.io/en/stable/>`_ (Cross-platform)
* `cx_Freeze <https://marcelotduarte.github.io/cx_Freeze/>`_ (Cross-platform)
* `py2app <https://github.com/ronaldoussoren/py2app>`_ (macOS only)
* `py2exe <http://www.py2exe.org/>`_ (Windows only)

Are there coding standards or a style guide for Python programs?
----------------------------------------------------------------

Yes.  The coding style required for standard library modules is documented as
:pep:`8`.


Core Language
=============

Why am I getting an UnboundLocalError when the variable has a value?
--------------------------------------------------------------------

It can be a surprise to get the UnboundLocalError in previously working
code when it is modified by adding an assignment statement somewhere in
the body of a function.

This code:

   >>> x = 10
   >>> def bar():
   ...     print(x)
   >>> bar()
   10

works, but this code:

   >>> x = 10
   >>> def foo():
   ...     print(x)
   ...     x += 1

results in an UnboundLocalError:

   >>> foo()
   Traceback (most recent call last):
     ...
   UnboundLocalError: local variable 'x' referenced before assignment

This is because when you make an assignment to a variable in a scope, that
variable becomes local to that scope and shadows any similarly named variable
in the outer scope.  Since the last statement in ``foo`` assigns a new value to
``x``, the compiler recognizes it as a local variable.  Consequently, the
earlier ``print(x)`` attempts to print the uninitialized local variable and
an error results.

In the example above you can access the outer scope variable by declaring it
global:

   >>> x = 10
   >>> def foobar():
   ...     global x
   ...     print(x)
   ...     x += 1
   >>> foobar()
   10

This explicit declaration is required in order to remind you that (unlike the
superficially analogous situation with class and instance variables) you are
actually modifying the value of the variable in the outer scope:

   >>> print(x)
   11

You can do a similar thing in a nested scope using the :keyword:`nonlocal`
keyword:

   >>> def foo():
   ...    x = 10
   ...    def bar():
   ...        nonlocal x
   ...        print(x)
   ...        x += 1
   ...    bar()
   ...    print(x)
   >>> foo()
   10
   11


What are the rules for local and global variables in Python?
------------------------------------------------------------

In Python, variables that are only referenced inside a function are implicitly
global.  If a variable is assigned a value anywhere within the function's body,
it's assumed to be a local unless explicitly declared as global.

Though a bit surprising at first, a moment's consideration explains this.  On
one hand, requiring :keyword:`global` for assigned variables provides a bar
against unintended side-effects.  On the other hand, if ``global`` was required
for all global references, you'd be using ``global`` all the time.  You'd have
to declare as global every reference to a built-in function or to a component of
an imported module.  This clutter would defeat the usefulness of the ``global``
declaration for identifying side-effects.
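
A short sketch of both rules (the ``counter`` name is just illustrative)::

   counter = 0

   def read_counter():
       return counter + 1      # 'counter' is only referenced, so it is the global

   def bump_counter():
       global counter          # assigned below, so the declaration is required
       counter += 1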


Why do lambdas defined in a loop with different values all return the same result?
----------------------------------------------------------------------------------

Assume you use a for loop to define a few different lambdas (or even plain
functions), e.g.::

   >>> squares = []
   >>> for x in range(5):
   ...     squares.append(lambda: x**2)

This gives you a list that contains 5 lambdas that calculate ``x**2``.  You
might expect that, when called, they would return, respectively, ``0``, ``1``,
``4``, ``9``, and ``16``.  However, when you actually try you will see that
they all return ``16``::

   >>> squares[2]()
   16
   >>> squares[4]()
   16

This happens because ``x`` is not local to the lambdas, but is defined in
the outer scope, and it is accessed when the lambda is called --- not when it
is defined.  At the end of the loop, the value of ``x`` is ``4``, so all the
functions now return ``4**2``, i.e. ``16``.  You can also verify this by
changing the value of ``x`` and see how the results of the lambdas change::

   >>> x = 8
   >>> squares[2]()
   64

In order to avoid this, you need to save the values in variables local to the
lambdas, so that they don't rely on the value of the global ``x``::

   >>> squares = []
   >>> for x in range(5):
   ...     squares.append(lambda n=x: n**2)

Here, ``n=x`` creates a new variable ``n`` local to the lambda and computed
when the lambda is defined so that it has the same value that ``x`` had at
that point in the loop.  This means that the value of ``n`` will be ``0``
in the first lambda, ``1`` in the second, ``2`` in the third, and so on.
Therefore each lambda will now return the correct result::

   >>> squares[2]()
   4
   >>> squares[4]()
   16

Note that this behaviour is not peculiar to lambdas, but applies to regular
functions too.


How do I share global variables across modules?
------------------------------------------------

The canonical way to share information across modules within a single program is
to create a special module (often called config or cfg).  Just import the config
module in all modules of your application; the module then becomes available as
a global name.  Because there is only one instance of each module, any changes
made to the module object get reflected everywhere.  For example:

config.py::

   x = 0   # Default value of the 'x' configuration setting

mod.py::

   import config
   config.x = 1

main.py::

   import config
   import mod
   print(config.x)

Note that using a module is also the basis for implementing the Singleton design
pattern, for the same reason.


What are the "best practices" for using import in a module?
-----------------------------------------------------------

In general, don't use ``from modulename import *``.  Doing so clutters the
importer's namespace, and makes it much harder for linters to detect undefined
names.

Import modules at the top of a file.  Doing so makes it clear what other modules
your code requires and avoids questions of whether the module name is in scope.
Using one import per line makes it easy to add and delete module imports, but
using multiple imports per line uses less screen space.

It's good practice if you import modules in the following order:

1. standard library modules -- e.g. ``sys``, ``os``, ``getopt``, ``re``
2. third-party library modules (anything installed in Python's site-packages
   directory) -- e.g. mx.DateTime, ZODB, PIL.Image, etc.
3. locally-developed modules

It is sometimes necessary to move imports to a function or class to avoid
problems with circular imports.  Gordon McMillan says:

   Circular imports are fine where both modules use the "import <module>" form
   of import.  They fail when the 2nd module wants to grab a name out of the
   first ("from module import name") and the import is at the top level.  That's
   because names in the 1st are not yet available, because the first module is
   busy importing the 2nd.

In this case, if the second module is only used in one function, then the import
can easily be moved into that function.  By the time the import is called, the
first module will have finished initializing, and the second module can do its
import.
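
A minimal sketch of that fix, with hypothetical module names::

   # a.py
   import b                      # a plain top-level "import b" is fine

   def deliver():
       # Moved here to break a circular "from b import helper" at the top level;
       # by the time deliver() runs, b has finished initializing.
       from b import helper
       return helper()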

It may also be necessary to move imports out of the top level of code if some of
the modules are platform-specific.  In that case, it may not even be possible to
import all of the modules at the top of the file.  In this case, importing the
correct modules in the corresponding platform-specific code is a good option.

Only move imports into a local scope, such as inside a function definition, if
it's necessary to solve a problem such as avoiding a circular import or you are
trying to reduce the initialization time of a module.  This technique is
especially helpful if many of the imports are unnecessary depending on how the
program executes.  You may also want to move imports into a function if the
modules are only ever used in that function.  Note that loading a module the
first time may be expensive because of the one time initialization of the
module, but loading a module multiple times is virtually free, costing only a
couple of dictionary lookups.  Even if the module name has gone out of scope,
the module is probably available in :data:`sys.modules`.


Why are default values shared between objects?
----------------------------------------------

This type of bug commonly bites neophyte programmers.  Consider this function::

   def foo(mydict={}):  # Danger: shared reference to one dict for all calls
       ... compute something ...
       mydict[key] = value
       return mydict

The first time you call this function, ``mydict`` contains a single item.  The
second time, ``mydict`` contains two items because when ``foo()`` begins
executing, ``mydict`` starts out with an item already in it.

It is often expected that a function call creates new objects for default
values. This is not what happens. Default values are created exactly once, when
the function is defined.  If that object is changed, like the dictionary in this
example, subsequent calls to the function will refer to this changed object.

By definition, immutable objects such as numbers, strings, tuples, and ``None``
are safe from change.  Changes to mutable objects such as dictionaries, lists,
and class instances can lead to confusion.

Because of this feature, it is good programming practice to not use mutable
objects as default values.  Instead, use ``None`` as the default value and
inside the function, check if the parameter is ``None`` and create a new
list/dictionary/whatever if it is.  For example, don't write::

   def foo(mydict={}):
       ...

but::

   def foo(mydict=None):
       if mydict is None:
           mydict = {}  # create a new dict for local namespace

This feature can be useful.  When you have a function that's time-consuming to
compute, a common technique is to cache the parameters and the resulting value
of each call to the function, and return the cached value if the same value is
requested again.  This is called "memoizing", and can be implemented like this::

   # Callers can only provide two parameters and optionally pass _cache by keyword
   def expensive(arg1, arg2, *, _cache={}):
       if (arg1, arg2) in _cache:
           return _cache[(arg1, arg2)]

       # Calculate the value
       result = ... expensive computation ...
       _cache[(arg1, arg2)] = result           # Store result in the cache
       return result

You could use a global variable containing a dictionary instead of the default
value; it's a matter of taste.
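
If you only need the caching behaviour and not direct access to the cache
dictionary, the standard library already provides it; a minimal sketch using
:func:`functools.lru_cache` (the computation shown is just a placeholder)::

   import functools

   @functools.lru_cache(maxsize=None)
   def expensive(arg1, arg2):
       # Stands in for a slow computation; results are cached per (arg1, arg2).
       return arg1 ** arg2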


How can I pass optional or keyword parameters from one function to another?
---------------------------------------------------------------------------

Collect the arguments using the ``*`` and ``**`` specifiers in the function's
parameter list; this gives you the positional arguments as a tuple and the
keyword arguments as a dictionary.  You can then pass these arguments when
calling another function by using ``*`` and ``**``::

   def f(x, *args, **kwargs):
       ...
       kwargs['width'] = '14.3c'
       ...
       g(x, *args, **kwargs)


.. index::
   single: argument; difference from parameter
   single: parameter; difference from argument

.. _faq-argument-vs-parameter:

What is the difference between arguments and parameters?
--------------------------------------------------------

:term:`Parameters <parameter>` are defined by the names that appear in a
function definition, whereas :term:`arguments <argument>` are the values
actually passed to a function when calling it.  Parameters define what types of
arguments a function can accept.  For example, given the function definition::

   def func(foo, bar=None, **kwargs):
       pass

*foo*, *bar* and *kwargs* are parameters of ``func``.  However, when calling
``func``, for example::

   func(42, bar=314, extra=somevar)

the values ``42``, ``314``, and ``somevar`` are arguments.


Why did changing list 'y' also change list 'x'?
------------------------------------------------

If you wrote code like::

   >>> x = []
   >>> y = x
   >>> y.append(10)
   >>> y
   [10]
   >>> x
   [10]

you might be wondering why appending an element to ``y`` changed ``x`` too.

There are two factors that produce this result:

1) Variables are simply names that refer to objects.  Doing ``y = x`` doesn't
   create a copy of the list -- it creates a new variable ``y`` that refers to
   the same object ``x`` refers to.  This means that there is only one object
   (the list), and both ``x`` and ``y`` refer to it.
2) Lists are :term:`mutable`, which means that you can change their content.

After the call to :meth:`~list.append`, the content of the mutable object has
changed from ``[]`` to ``[10]``.  Since both the variables refer to the same
object, using either name accesses the modified value ``[10]``.

If we instead assign an immutable object to ``x``::

   >>> x = 5  # ints are immutable
   >>> y = x
   >>> x = x + 1  # 5 can't be mutated, we are creating a new object here
   >>> x
   6
   >>> y
   5

we can see that in this case ``x`` and ``y`` are not equal anymore.  This is
because integers are :term:`immutable`, and when we do ``x = x + 1`` we are not
mutating the int ``5`` by incrementing its value; instead, we are creating a
new object (the int ``6``) and assigning it to ``x`` (that is, changing which
object ``x`` refers to).  After this assignment we have two objects (the ints
``6`` and ``5``) and two variables that refer to them (``x`` now refers to
``6`` but ``y`` still refers to ``5``).

Some operations (for example ``y.append(10)`` and ``y.sort()``) mutate the
object, whereas superficially similar operations (for example ``y = y + [10]``
and ``sorted(y)``) create a new object.  In general in Python (and in all cases
in the standard library) a method that mutates an object will return ``None``
to help avoid getting the two types of operations confused.  So if you
mistakenly write ``y.sort()`` thinking it will give you a sorted copy of ``y``,
you'll instead end up with ``None``, which will likely cause your program to
generate an easily diagnosed error.

However, there is one class of operations where the same operation sometimes
has different behaviors with different types:  the augmented assignment
operators.  For example, ``+=`` mutates lists but not tuples or ints (``a_list
+= [1, 2, 3]`` is equivalent to ``a_list.extend([1, 2, 3])`` and mutates
``a_list``, whereas ``some_tuple += (1, 2, 3)`` and ``some_int += 1`` create
new objects).

In other words:

* If we have a mutable object (:class:`list`, :class:`dict`, :class:`set`,
  etc.), we can use some specific operations to mutate it and all the variables
  that refer to it will see the change.
* If we have an immutable object (:class:`str`, :class:`int`, :class:`tuple`,
  etc.), all the variables that refer to it will always see the same value,
  but operations that transform that value into a new value always return a new
  object.

If you want to know if two variables refer to the same object or not, you can
use the :keyword:`is` operator, or the built-in function :func:`id`.
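
A quick illustration::

   >>> x = y = [1, 2]
   >>> z = [1, 2]
   >>> x is y
   True
   >>> x is z
   False
   >>> id(x) == id(y)
   True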


How do I write a function with output parameters (call by reference)?
---------------------------------------------------------------------

Remember that arguments are passed by assignment in Python.  Since assignment
just creates references to objects, there's no alias between an argument name in
the caller and callee, and so no call-by-reference per se.  You can achieve the
desired effect in a number of ways.

1) By returning a tuple of the results::

      >>> def func1(a, b):
      ...     a = 'new-value'        # a and b are local names
      ...     b = b + 1              # assigned to new objects
      ...     return a, b            # return new values
      ...
      >>> x, y = 'old-value', 99
      >>> func1(x, y)
      ('new-value', 100)

   This is almost always the clearest solution.

2) By using global variables.  This isn't thread-safe, and is not recommended.

3) By passing a mutable (changeable in-place) object::

      >>> def func2(a):
      ...     a[0] = 'new-value'     # 'a' references a mutable list
      ...     a[1] = a[1] + 1        # changes a shared object
      ...
      >>> args = ['old-value', 99]
      >>> func2(args)
      >>> args
      ['new-value', 100]

4) By passing in a dictionary that gets mutated::

      >>> def func3(args):
      ...     args['a'] = 'new-value'     # args is a mutable dictionary
      ...     args['b'] = args['b'] + 1   # change it in-place
      ...
      >>> args = {'a': 'old-value', 'b': 99}
      >>> func3(args)
      >>> args
      {'a': 'new-value', 'b': 100}

5) Or bundle up values in a class instance::

      >>> class Namespace:
      ...     def __init__(self, /, **args):
      ...         for key, value in args.items():
      ...             setattr(self, key, value)
      ...
      >>> def func4(args):
      ...     args.a = 'new-value'        # args is a mutable Namespace
      ...     args.b = args.b + 1         # change object in-place
      ...
      >>> args = Namespace(a='old-value', b=99)
      >>> func4(args)
      >>> vars(args)
      {'a': 'new-value', 'b': 100}


   There's almost never a good reason to get this complicated.

Your best choice is to return a tuple containing the multiple results.


How do you make a higher order function in Python?
--------------------------------------------------

You have two choices: you can use nested scopes or you can use callable objects.
For example, suppose you wanted to define ``linear(a,b)`` which returns a
function ``f(x)`` that computes the value ``a*x+b``.  Using nested scopes::

   def linear(a, b):
       def result(x):
           return a * x + b
       return result

Or using a callable object::

   class linear:

       def __init__(self, a, b):
           self.a, self.b = a, b

       def __call__(self, x):
           return self.a * x + self.b

In both cases, ::

   taxes = linear(0.3, 2)

gives a callable object where ``taxes(10e6) == 0.3 * 10e6 + 2``.

The callable object approach has the disadvantage that it is a bit slower and
results in slightly longer code.  However, note that a collection of callables
can share their signature via inheritance::

   class exponential(linear):
       # __init__ inherited
       def __call__(self, x):
           return self.a * (x ** self.b)

An object can encapsulate state for several methods::

   class counter:

       value = 0

       def set(self, x):
           self.value = x

       def up(self):
           self.value = self.value + 1

       def down(self):
           self.value = self.value - 1

   count = counter()
   inc, dec, reset = count.up, count.down, count.set

Here ``inc()``, ``dec()`` and ``reset()`` act like functions which share the
same counting variable.


How do I copy an object in Python?
----------------------------------

For the general case, try :func:`copy.copy` or :func:`copy.deepcopy`.
Not all objects can be copied, but most can.
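
For example::

   >>> import copy
   >>> original = {'a': [1, 2, 3]}
   >>> shallow = copy.copy(original)      # new dict, but it shares the inner list
   >>> deep = copy.deepcopy(original)     # copies the inner list as well
   >>> shallow['a'] is original['a']
   True
   >>> deep['a'] is original['a']
   False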

Some objects can be copied more easily.  Dictionaries have a :meth:`~dict.copy`
method::

   newdict = olddict.copy()

Sequences can be copied by slicing::

   new_l = l[:]


How can I find the methods or attributes of an object?
------------------------------------------------------

For an instance ``x`` of a user-defined class, ``dir(x)`` returns an alphabetized
list of the names of the instance's attributes and methods, together with the
attributes and methods defined by its class.
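
For example::

   >>> class Widget:
   ...     def render(self):
   ...         pass
   ...
   >>> w = Widget()
   >>> w.colour = 'red'
   >>> 'render' in dir(w) and 'colour' in dir(w)
   True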


How can my code discover the name of an object?
-----------------------------------------------

Generally speaking, it can't, because objects don't really have names.
Essentially, assignment always binds a name to a value; the same is true of
``def`` and ``class`` statements, but in that case the value is a
callable. Consider the following code::

   >>> class A:
   ...     pass
   ...
   >>> B = A
   >>> a = B()
   >>> b = a
   >>> print(b)
   <__main__.A object at 0x16D07CC>
   >>> print(a)
   <__main__.A object at 0x16D07CC>

Arguably the class has a name: even though it is bound to two names and invoked
through the name ``B``, the created instance is still reported as an instance of
class ``A``.  However, it is impossible to say whether the instance's name is
``a`` or ``b``, since both names are bound to the same value.

Generally speaking it should not be necessary for your code to "know the names"
of particular values. Unless you are deliberately writing introspective
programs, this is usually an indication that a change of approach might be
beneficial.

In comp.lang.python, Fredrik Lundh once gave an excellent analogy in answer to
this question:

   The same way as you get the name of that cat you found on your porch: the cat
   (object) itself cannot tell you its name, and it doesn't really care -- so
   the only way to find out what it's called is to ask all your neighbours
   (namespaces) if it's their cat (object)...

   ....and don't be surprised if you'll find that it's known by many names, or
   no name at all!


What's up with the comma operator's precedence?
-----------------------------------------------

Comma is not an operator in Python.  Consider this session::

    >>> "a" in "b", "a"
    (False, 'a')

Since the comma is not an operator, but a separator between expressions, the
above is evaluated as if you had entered::

    ("a" in "b"), "a"

not::

    "a" in ("b", "a")

The same is true of the various assignment operators (``=``, ``+=`` etc).  They
are not truly operators but syntactic delimiters in assignment statements.


Is there an equivalent of C's "?:" ternary operator?
----------------------------------------------------

Yes, there is. The syntax is as follows::

   [on_true] if [expression] else [on_false]

   x, y = 50, 25
   small = x if x < y else y

Before this syntax was introduced in Python 2.5, a common idiom was to use
logical operators::

   [expression] and [on_true] or [on_false]

However, this idiom is unsafe, as it can give wrong results when *on_true*
has a false boolean value.  Therefore, it is always better to use
the ``... if ... else ...`` form.


Is it possible to write obfuscated one-liners in Python?
--------------------------------------------------------

Yes.  Usually this is done by nesting :keyword:`lambda` within
:keyword:`!lambda`.  See the following three examples, due to Ulf Bartelt::

   from functools import reduce

   # Primes < 1000
   print(list(filter(None,map(lambda y:y*reduce(lambda x,y:x*y!=0,
   map(lambda x,y=y:y%x,range(2,int(pow(y,0.5)+1))),1),range(2,1000)))))

   # First 10 Fibonacci numbers
   print(list(map(lambda x,f=lambda x,f:(f(x-1,f)+f(x-2,f)) if x>1 else 1:
   f(x,f), range(10))))

   # Mandelbrot set
   print((lambda Ru,Ro,Iu,Io,IM,Sx,Sy:reduce(lambda x,y:x+y,map(lambda y,
   Iu=Iu,Io=Io,Ru=Ru,Ro=Ro,Sy=Sy,L=lambda yc,Iu=Iu,Io=Io,Ru=Ru,Ro=Ro,i=IM,
   Sx=Sx,Sy=Sy:reduce(lambda x,y:x+y,map(lambda x,xc=Ru,yc=yc,Ru=Ru,Ro=Ro,
   i=i,Sx=Sx,F=lambda xc,yc,x,y,k,f=lambda xc,yc,x,y,k,f:(k<=0)or (x*x+y*y
   >=4.0) or 1+f(xc,yc,x*x-y*y+xc,2.0*x*y+yc,k-1,f):f(xc,yc,x,y,k,f):chr(
   64+F(Ru+x*(Ro-Ru)/Sx,yc,0,0,i)),range(Sx))):L(Iu+y*(Io-Iu)/Sy),range(Sy
   ))))(-2.1, 0.7, -1.2, 1.2, 30, 80, 24))
   #    \___ ___/  \___ ___/  |   |   |__ lines on screen
   #        V          V      |   |______ columns on screen
   #        |          |      |__________ maximum of "iterations"
   #        |          |_________________ range on y axis
   #        |____________________________ range on x axis

Don't try this at home, kids!


.. _faq-positional-only-arguments:

What does the slash (/) in the parameter list of a function mean?
--------------------------------------------------------------------

A slash in the argument list of a function denotes that the parameters prior to
it are positional-only.  Positional-only parameters are the ones without an
externally-usable name.  Upon calling a function that accepts positional-only
parameters, arguments are mapped to parameters based solely on their position.
For example, :func:`divmod` is a function that accepts positional-only
parameters. Its documentation looks like this::

   >>> help(divmod)
   Help on built-in function divmod in module builtins:

   divmod(x, y, /)
       Return the tuple (x//y, x%y).  Invariant: div*y + mod == x.

The slash at the end of the parameter list means that both parameters are
positional-only. Thus, calling :func:`divmod` with keyword arguments would lead
to an error::

   >>> divmod(x=3, y=4)
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
   TypeError: divmod() takes no keyword arguments
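
You can use the same marker when defining your own functions; a minimal sketch
(requires Python 3.8 or later; the function here is purely illustrative)::

   def greet(name, /, greeting="Hello"):
       return f"{greeting}, {name}!"

   greet("Monty")          # fine
   greet(name="Monty")     # raises TypeError, because name cannot be passed by keyword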


Numbers and strings
===================

How do I specify hexadecimal and octal integers?
------------------------------------------------

To specify an octal integer, precede the octal value with a zero, and then a lower
or uppercase "o".  For example, to set the variable "a" to the octal value "10"
(8 in decimal), type::

   >>> a = 0o10
   >>> a
   8

Hexadecimal is just as easy.  Simply precede the hexadecimal number with a zero,
and then a lower or uppercase "x".  Hexadecimal digits can be specified in lower
or uppercase.  For example, in the Python interpreter::

   >>> a = 0xa5
   >>> a
   165
   >>> b = 0XB2
   >>> b
   178


Why does -22 // 10 return -3?
-----------------------------

It's primarily driven by the desire that ``i % j`` have the same sign as ``j``.
If you want that, and also want::

    i == (i // j) * j + (i % j)

then integer division has to return the floor.  C also requires this identity to
hold, and so compilers that truncate ``i // j`` need to make ``i % j`` have
the same sign as ``i``.

There are few real use cases for ``i % j`` when ``j`` is negative.  When ``j``
is positive, there are many, and in virtually all of them it's more useful for
``i % j`` to be ``>= 0``.  If the clock says 10 now, what did it say 200 hours
ago?  ``-190 % 12 == 2`` is useful; ``-190 % 12 == -10`` is a bug waiting to
bite.
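
To see the identity at work::

   >>> -22 // 10
   -3
   >>> -22 % 10
   8
   >>> (-22 // 10) * 10 + (-22 % 10)
   -22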


How do I get an attribute of an int literal instead of a SyntaxError?
----------------------------------------------------------------------

Trying to look up an ``int`` literal attribute in the normal manner gives
a syntax error because the period is seen as a decimal point::

   >>> 1.__class__
     File "<stdin>", line 1
     1.__class__
      ^
   SyntaxError: invalid decimal literal

The solution is to separate the literal from the period
with either a space or parentheses.

   >>> 1 .__class__
   <class 'int'>
   >>> (1).__class__
   <class 'int'>


How do I convert a string to a number?
--------------------------------------

For integers, use the built-in :func:`int` type constructor, e.g. ``int('144')
== 144``.  Similarly, :func:`float` converts to floating-point,
e.g. ``float('144') == 144.0``.

By default, these interpret the number as decimal, so that
``int('0144') == 144`` holds true, and ``int('0x144')`` raises
:exc:`ValueError`.  ``int(string, base)`` takes the base to convert from as a
second optional argument, so ``int('0x144', 16) == 324``.  If the base is
specified as 0, the number is interpreted using Python's rules: a leading
``0o`` indicates octal, and ``0x`` indicates a hex number.

Do not use the built-in function :func:`eval` if all you need is to convert
strings to numbers.  :func:`eval` will be significantly slower and it presents a
security risk: someone could pass you a Python expression that might have
unwanted side effects.  For example, someone could pass
``__import__('os').system("rm -rf $HOME")`` which would erase your home
directory.

:func:`eval` also has the effect of interpreting numbers as Python expressions,
so that e.g. ``eval('09')`` gives a syntax error because Python does not allow
leading '0' in a decimal number (except '0').


How do I convert a number to a string?
--------------------------------------

To convert, e.g., the number 144 to the string '144', use the built-in type
constructor :func:`str`.  If you want a hexadecimal or octal representation, use
the built-in functions :func:`hex` or :func:`oct`.  For fancy formatting, see
the :ref:`f-strings` and :ref:`formatstrings` sections,
e.g. ``"{:04d}".format(144)`` yields
``'0144'`` and ``"{:.3f}".format(1.0/3.0)`` yields ``'0.333'``.
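
The same results can be produced with f-strings and the built-in :func:`hex`
and :func:`oct` functions::

   >>> f"{144:04d}"
   '0144'
   >>> f"{1.0/3.0:.3f}"
   '0.333'
   >>> hex(255), oct(8)
   ('0xff', '0o10')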


How do I modify a string in place?
----------------------------------

You can't, because strings are immutable.  In most situations, you should
simply construct a new string from the various parts you want to assemble
it from.  However, if you need an object with the ability to modify in-place
unicode data, try using an :class:`io.StringIO` object or the :mod:`array`
module::

   >>> import io
   >>> s = "Hello, world"
   >>> sio = io.StringIO(s)
   >>> sio.getvalue()
   'Hello, world'
   >>> sio.seek(7)
   7
   >>> sio.write("there!")
   6
   >>> sio.getvalue()
   'Hello, there!'

   >>> import array
   >>> a = array.array('u', s)
   >>> print(a)
   array('u', 'Hello, world')
   >>> a[0] = 'y'
   >>> print(a)
   array('u', 'yello, world')
   >>> a.tounicode()
   'yello, world'


How do I use strings to call functions/methods?
-----------------------------------------------

There are various techniques.

* The best is to use a dictionary that maps strings to functions.  The primary
  advantage of this technique is that the strings do not need to match the names
  of the functions.  This is also the primary technique used to emulate a case
  construct::

     def a():
         pass

     def b():
         pass

     dispatch = {'go': a, 'stop': b}  # Note lack of parens for funcs

     dispatch[get_input()]()  # Note trailing parens to call function

* Use the built-in function :func:`getattr`::

     import foo
     getattr(foo, 'bar')()

  Note that :func:`getattr` works on any object, including classes, class
  instances, modules, and so on.

  This is used in several places in the standard library, like this::

     class Foo:
         def do_foo(self):
             ...

         def do_bar(self):
             ...

     f = getattr(foo_instance, 'do_' + opname)
     f()


* Use :func:`locals` to resolve the function name::

     def myFunc():
         print("hello")

     fname = "myFunc"

     f = locals()[fname]
     f()


Is there an equivalent to Perl's chomp() for removing trailing newlines from strings?
-------------------------------------------------------------------------------------

You can use ``S.rstrip("\r\n")`` to remove all occurrences of any line
terminator from the end of the string ``S`` without removing other trailing
whitespace.  If the string ``S`` represents more than one line, with several
empty lines at the end, the line terminators for all the blank lines will
be removed::

   >>> lines = ("line 1 \r\n"
   ...          "\r\n"
   ...          "\r\n")
   >>> lines.rstrip("\n\r")
   'line 1 '

Since this is typically only desired when reading text one line at a time, using
``S.rstrip()`` this way works well.


Is there a scanf() or sscanf() equivalent?
------------------------------------------

Not as such.

For simple input parsing, the easiest approach is usually to split the line into
whitespace-delimited words using the :meth:`~str.split` method of string objects
and then convert decimal strings to numeric values using :func:`int` or
:func:`float`.  ``split()`` supports an optional "sep" parameter which is useful
if the line uses something other than whitespace as a separator.

For more complicated input parsing, regular expressions are more powerful
than C's :c:func:`sscanf` and better suited for the task.
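
For example, a hedged sketch of both approaches for a line such as
``"alpha 42 3.14"``::

   >>> line = "alpha 42 3.14"
   >>> name, count, value = line.split()
   >>> name, int(count), float(value)
   ('alpha', 42, 3.14)

   >>> import re
   >>> m = re.match(r"(\w+)\s+(\d+)\s+([\d.]+)", line)
   >>> m.group(1), int(m.group(2)), float(m.group(3))
   ('alpha', 42, 3.14)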


What does a 'UnicodeDecodeError' or 'UnicodeEncodeError' mean?
-------------------------------------------------------------------

See the :ref:`unicode-howto`.


Performance
===========

My program is too slow. How do I speed it up?
---------------------------------------------

That's a tough one, in general.  First, here are a list of things to
remember before diving further:

* Performance characteristics vary across Python implementations.  This FAQ
  focuses on :term:`CPython`.
* Behaviour can vary across operating systems, especially when talking about
  I/O or multi-threading.
* You should always find the hot spots in your program *before* attempting to
  optimize any code (see the :mod:`profile` module).
* Writing benchmark scripts will allow you to iterate quickly when searching
  for improvements (see the :mod:`timeit` module and the short sketch after
  this list).
* It is highly recommended to have good code coverage (through unit testing
  or any other technique) before potentially introducing regressions hidden
  in sophisticated optimizations.
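
As a quick illustration of the benchmarking point above, a minimal
:mod:`timeit` sketch (the two statements being compared are just examples)::

   import timeit

   loop = "result = []\nfor i in range(100):\n    result.append(i * i)"
   comprehension = "result = [i * i for i in range(100)]"

   print(timeit.timeit(loop, number=10_000))           # time the explicit loop
   print(timeit.timeit(comprehension, number=10_000))  # time the list comprehension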

That being said, there are many tricks to speed up Python code.  Here are
some general principles which go a long way towards reaching acceptable
performance levels:

* Making your algorithms faster (or changing to faster ones) can yield
  much larger benefits than trying to sprinkle micro-optimization tricks
  all over your code.

* Use the right data structures.  Study documentation for the :ref:`bltin-types`
  and the :mod:`collections` module.

* When the standard library provides a primitive for doing something, it is
  likely (although not guaranteed) to be faster than any alternative you
  may come up with.  This is doubly true for primitives written in C, such
  as builtins and some extension types.  For example, be sure to use
  either the :meth:`list.sort` built-in method or the related :func:`sorted`
  function to do sorting (and see the :ref:`sortinghowto` for examples
  of moderately advanced usage).

* Abstractions tend to create indirections and force the interpreter to work
  more.  If the levels of indirection outweigh the amount of useful work
  done, your program will be slower.  You should avoid excessive abstraction,
  especially in the form of tiny functions or methods (which are also often
  detrimental to readability).

If you have reached the limit of what pure Python can allow, there are tools
that can take you further.  For example, `Cython <http://cython.org>`_ can
compile a slightly modified version of Python code into a C extension, and
can be used on many different platforms.  Cython can take advantage of
compilation (and optional type annotations) to make your code significantly
faster than when interpreted.  If you are confident in your C programming
skills, you can also :ref:`write a C extension module <extending-index>`
yourself.

.. seealso::
   The wiki page devoted to `performance tips
   <https://wiki.python.org/moin/PythonSpeed/PerformanceTips>`_.

.. _efficient_string_concatenation:

What is the most efficient way to concatenate many strings together?
--------------------------------------------------------------------

:class:`str` and :class:`bytes` objects are immutable, therefore concatenating
many strings together is inefficient as each concatenation creates a new
object.  In the general case, the total runtime cost is quadratic in the
total string length.

To accumulate many :class:`str` objects, the recommended idiom is to place
them into a list and call :meth:`str.join` at the end::

   chunks = []
   for s in my_strings:
       chunks.append(s)
   result = ''.join(chunks)

(another reasonably efficient idiom is to use :class:`io.StringIO`)

To accumulate many :class:`bytes` objects, the recommended idiom is to extend
a :class:`bytearray` object using in-place concatenation (the ``+=`` operator)::

   result = bytearray()
   for b in my_bytes_objects:
       result += b


Sequences (Tuples/Lists)
========================

How do I convert between tuples and lists?
------------------------------------------

The type constructor ``tuple(seq)`` converts any sequence (actually, any
iterable) into a tuple with the same items in the same order.

For example, ``tuple([1, 2, 3])`` yields ``(1, 2, 3)`` and ``tuple('abc')``
yields ``('a', 'b', 'c')``.  If the argument is a tuple, it does not make a copy
but returns the same object, so it is cheap to call :func:`tuple` when you
aren't sure that an object is already a tuple.

The type constructor ``list(seq)`` converts any sequence or iterable into a list
with the same items in the same order.  For example, ``list((1, 2, 3))`` yields
``[1, 2, 3]`` and ``list('abc')`` yields ``['a', 'b', 'c']``.  If the argument
is a list, it makes a copy just like ``seq[:]`` would.


What's a negative index?
------------------------

Python sequences are indexed with positive numbers and negative numbers.  For
positive numbers, 0 is the first index, 1 is the second index, and so forth.
For negative indices, -1 is the last index, -2 is the penultimate (next to last)
index, and so forth.  Think of ``seq[-n]`` as the same as ``seq[len(seq)-n]``.

Using negative indices can be very convenient.  For example ``S[:-1]`` is all of
the string except for its last character, which is useful for removing the
trailing newline from a string.
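
For example::

   >>> s = "Python"
   >>> s[-1]
   'n'
   >>> s[len(s) - 1]
   'n'
   >>> s[:-1]
   'Pytho'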


How do I iterate over a sequence in reverse order?
--------------------------------------------------

Use the :func:`reversed` built-in function::

   for x in reversed(sequence):
       ...  # do something with x ...

This won't touch your original sequence; it builds an iterator that yields the
items in reverse order.


How do you remove duplicates from a list?
-----------------------------------------

See the Python Cookbook for a long discussion of many ways to do this:

   https://code.activestate.com/recipes/52560/

If you don't mind reordering the list, sort it and then scan from the end of the
list, deleting duplicates as you go::

   if mylist:
       mylist.sort()
       last = mylist[-1]
       for i in range(len(mylist)-2, -1, -1):
           if last == mylist[i]:
               del mylist[i]
           else:
               last = mylist[i]

If all elements of the list may be used as set keys (i.e. they are all
:term:`hashable`) this is often faster::

   mylist = list(set(mylist))

This converts the list into a set, thereby removing duplicates, and then back
into a list.
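
If you also want to preserve the order in which elements first appear, one
common idiom (relying on dictionaries preserving insertion order, Python 3.7
and later) is::

   mylist = list(dict.fromkeys(mylist))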


How do you remove multiple items from a list?
---------------------------------------------

As with removing duplicates, explicitly iterating in reverse with a
delete condition is one possibility.  However, it is easier and faster
to use slice replacement with an implicit or explicit forward iteration.
Here are three variations::

   mylist[:] = filter(keep_function, mylist)
   mylist[:] = (x for x in mylist if keep_condition)
   mylist[:] = [x for x in mylist if keep_condition]

The list comprehension may be fastest.


How do you make an array in Python?
-----------------------------------

Use a list::

   ["this", 1, "is", "an", "array"]

Lists are equivalent to C or Pascal arrays in their time complexity; the primary
difference is that a Python list can contain objects of many different types.

The ``array`` module also provides methods for creating arrays of fixed types
with compact representations, but they are slower to index than lists.  Also
note that NumPy and other third party packages define array-like structures with
various characteristics as well.

To get Lisp-style linked lists, you can emulate cons cells using tuples::

   lisp_list = ("like",  ("this",  ("example", None) ) )

If mutability is desired, you could use lists instead of tuples.  Here the
analogue of lisp car is ``lisp_list[0]`` and the analogue of cdr is
``lisp_list[1]``.  Only do this if you're sure you really need to, because it's
usually a lot slower than using Python lists.


.. _faq-multidimensional-list:

How do I create a multidimensional list?
----------------------------------------

You probably tried to make a multidimensional array like this::

   >>> A = [[None] * 2] * 3

This looks correct if you print it:

.. testsetup::

   A = [[None] * 2] * 3

.. doctest::

   >>> A
   [[None, None], [None, None], [None, None]]

But when you assign a value, it shows up in multiple places:

.. testsetup::

   A = [[None] * 2] * 3

.. doctest::

   >>> A[0][0] = 5
   >>> A
   [[5, None], [5, None], [5, None]]

The reason is that replicating a list with ``*`` doesn't create copies, it only
creates references to the existing objects.  The ``*3`` creates a list
containing 3 references to the same list of length two.  Changes to one row will
show in all rows, which is almost certainly not what you want.

The suggested approach is to create a list of the desired length first and then
fill in each element with a newly created list::

   A = [None] * 3
   for i in range(3):
       A[i] = [None] * 2

This generates a list containing 3 different lists of length two.  You can also
use a list comprehension::

   w, h = 2, 3
   A = [[None] * w for i in range(h)]

Or, you can use an extension that provides a matrix datatype; `NumPy
<http://www.numpy.org/>`_ is the best known.


How do I apply a method to a sequence of objects?
-------------------------------------------------

Use a list comprehension::

   result = [obj.method() for obj in mylist]

.. _faq-augmented-assignment-tuple-error:

Why does a_tuple[i] += ['item'] raise an exception when the addition works?
---------------------------------------------------------------------------

This is because of a combination of the fact that augmented assignment
operators are *assignment* operators, and the difference between mutable and
immutable objects in Python.

This discussion applies in general when augmented assignment operators are
applied to elements of a tuple that point to mutable objects, but we'll use
a ``list`` and ``+=`` as our exemplar.

If you wrote::

   >>> a_tuple = (1, 2)
   >>> a_tuple[0] += 1
   Traceback (most recent call last):
      ...
   TypeError: 'tuple' object does not support item assignment

The reason for the exception should be immediately clear: ``1`` is added to the
object ``a_tuple[0]`` points to (``1``), producing the result object, ``2``,
but when we attempt to assign the result of the computation, ``2``, to element
``0`` of the tuple, we get an error because we can't change what an element of
a tuple points to.

Under the covers, what this augmented assignment statement is doing is
approximately this::

   >>> result = a_tuple[0] + 1
   >>> a_tuple[0] = result
   Traceback (most recent call last):
     ...
   TypeError: 'tuple' object does not support item assignment

It is the assignment part of the operation that produces the error, since a
tuple is immutable.

When you write something like::

   >>> a_tuple = (['foo'], 'bar')
   >>> a_tuple[0] += ['item']
   Traceback (most recent call last):
     ...
   TypeError: 'tuple' object does not support item assignment

The exception is a bit more surprising, and even more surprising is the fact
that even though there was an error, the append worked::

    >>> a_tuple[0]
    ['foo', 'item']

To see why this happens, you need to know that (a) if an object implements an
``__iadd__`` magic method, it gets called when the ``+=`` augmented assignment
is executed, and its return value is what gets used in the assignment statement;
and (b) for lists, ``__iadd__`` is equivalent to calling ``extend`` on the list
and returning the list.  That's why we say that for lists, ``+=`` is a
"shorthand" for ``list.extend``::

    >>> a_list = []
    >>> a_list += [1]
    >>> a_list
    [1]

This is equivalent to::

    >>> result = a_list.__iadd__([1])
    >>> a_list = result

The object pointed to by a_list has been mutated, and the pointer to the
mutated object is assigned back to ``a_list``.  The end result of the
assignment is a no-op, since it is a pointer to the same object that ``a_list``
was previously pointing to, but the assignment still happens.

Thus, in our tuple example what is happening is equivalent to::

   >>> result = a_tuple[0].__iadd__(['item'])
   >>> a_tuple[0] = result
   Traceback (most recent call last):
     ...
   TypeError: 'tuple' object does not support item assignment

The ``__iadd__`` succeeds, and thus the list is extended, but even though
``result`` points to the same object that ``a_tuple[0]`` already points to,
that final assignment still results in an error, because tuples are immutable.
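
If your goal is simply to extend the list stored in the tuple, call the
mutating method directly and skip the assignment entirely (continuing the
example above)::

   >>> a_tuple = (['foo'], 'bar')
   >>> a_tuple[0].extend(['item'])   # mutates the list in place
   >>> a_tuple[0]
   ['foo', 'item']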


I want to do a complicated sort: can you do a Schwartzian Transform in Python?
------------------------------------------------------------------------------

The technique, attributed to Randal Schwartz of the Perl community, sorts the
elements of a list by a metric which maps each element to its "sort value". In
Python, use the ``key`` argument for the :meth:`list.sort` method::

   # L is a list of strings; sort a copy of it by the
   # integer value embedded in the slice s[10:15].
   Isorted = L[:]
   Isorted.sort(key=lambda s: int(s[10:15]))
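
The :func:`sorted` built-in works the same way and returns a new list.  Here is
a self-contained sketch with made-up data, sorting strings by an embedded
number::

   records = ['id-00042-alpha', 'id-00007-beta', 'id-00100-gamma']
   by_number = sorted(records, key=lambda s: int(s[3:8]))
   # ['id-00007-beta', 'id-00042-alpha', 'id-00100-gamma']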


How can I sort one list by values from another list?
----------------------------------------------------

Merge them into an iterator of tuples, sort the resulting list, and then pick
out the element you want. ::

   >>> list1 = ["what", "I'm", "sorting", "by"]
   >>> list2 = ["something", "else", "to", "sort"]
   >>> pairs = zip(list1, list2)
   >>> pairs = sorted(pairs)
   >>> pairs
   [("I'm", 'else'), ('by', 'sort'), ('sorting', 'to'), ('what', 'something')]
   >>> result = [x[1] for x in pairs]
   >>> result
   ['else', 'sort', 'to', 'something']
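
The same result can be written more compactly as a list comprehension over the
sorted pairs (equivalent to the steps above)::

   result = [x for _, x in sorted(zip(list1, list2))]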


Objects
=======

What is a class?
----------------

A class is the particular object type created by executing a class statement.
Class objects are used as templates to create instance objects, which embody
both the data (attributes) and code (methods) specific to a datatype.

A class can be based on one or more other classes, called its base class(es). It
then inherits the attributes and methods of its base classes. This allows an
object model to be successively refined by inheritance.  You might have a
generic ``Mailbox`` class that provides basic accessor methods for a mailbox,
and subclasses such as ``MboxMailbox``, ``MaildirMailbox``, ``OutlookMailbox``
that handle various specific mailbox formats.
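
A minimal sketch of such a hierarchy (the method bodies are placeholders)::

   class Mailbox:
       def messages(self):
           raise NotImplementedError

   class MboxMailbox(Mailbox):
       def messages(self):
           ...  # parse the single-file mbox format

   class MaildirMailbox(Mailbox):
       def messages(self):
           ...  # read one file per message from a Maildir directory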


What is a method?
-----------------

A method is a function on some object ``x`` that you normally call as
``x.name(arguments...)``.  Methods are defined as functions inside the class
definition::

   class C:
       def meth(self, arg):
           return arg * 2 + self.attribute


What is self?
-------------

Self is merely a conventional name for the first argument of a method.  A method
defined as ``meth(self, a, b, c)`` should be called as ``x.meth(a, b, c)`` for
some instance ``x`` of the class in which the definition occurs; the called
method will think it is called as ``meth(x, a, b, c)``.
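
In other words, these two calls do exactly the same thing (a tiny
illustration)::

   class C:
       def meth(self, arg):
           return arg * 2

   x = C()
   print(x.meth(10))      # the usual spelling; prints 20
   print(C.meth(x, 10))   # equivalent explicit form; prints 20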

See also :ref:`why-self`.


How do I check if an object is an instance of a given class or of a subclass of it?
-----------------------------------------------------------------------------------

Use the built-in function ``isinstance(obj, cls)``.  You can check if an object
is an instance of any of a number of classes by providing a tuple instead of a
single class, e.g. ``isinstance(obj, (class1, class2, ...))``, and can also
check whether an object is one of Python's built-in types, e.g.
``isinstance(obj, str)`` or ``isinstance(obj, (int, float, complex))``.

Note that :func:`isinstance` also checks for virtual inheritance from an
:term:`abstract base class`.  So, the test will return ``True`` for a
registered class even if it hasn't directly or indirectly inherited from it.  To
test for "true inheritance", scan the :term:`MRO` of the class:

.. testcode::

    from collections.abc import Mapping

    class P:
        pass

    class C(P):
        pass

    Mapping.register(P)

.. doctest::

    >>> c = C()
    >>> isinstance(c, C)        # direct
    True
    >>> isinstance(c, P)        # indirect
    True
    >>> isinstance(c, Mapping)  # virtual
    True

    # Actual inheritance chain
    >>> type(c).__mro__
    (<class 'C'>, <class 'P'>, <class 'object'>)

    # Test for "true inheritance"
    >>> Mapping in type(c).__mro__
    False

Note that most programs do not use :func:`isinstance` on user-defined classes
very often.  If you are developing the classes yourself, a more proper
object-oriented style is to define methods on the classes that encapsulate a
particular behaviour, instead of checking the object's class and doing a
different thing based on what class it is.  For example, if you have a function
that does something::

   def search(obj):
       if isinstance(obj, Mailbox):
           ...  # code to search a mailbox
       elif isinstance(obj, Document):
           ...  # code to search a document
       elif ...

A better approach is to define a ``search()`` method on all the classes and just
call it::

   class Mailbox:
       def search(self):
           ...  # code to search a mailbox

   class Document:
       def search(self):
           ...  # code to search a document

   obj.search()


What is delegation?
-------------------

Delegation is an object oriented technique (also called a design pattern).
Let's say you have an object ``x`` and want to change the behaviour of just one
of its methods.  You can create a new class that provides a new implementation
of the method you're interested in changing and delegates all other methods to
the corresponding method of ``x``.

Python programmers can easily implement delegation.  For example, the following
class behaves like a file object but converts all written data
to uppercase::

   class UpperOut:

       def __init__(self, outfile):
           self._outfile = outfile

       def write(self, s):
           self._outfile.write(s.upper())

       def __getattr__(self, name):
           return getattr(self._outfile, name)

Here the ``UpperOut`` class redefines the ``write()`` method to convert the
argument string to uppercase before calling the underlying
``self._outfile.write()`` method.  All other methods are delegated to the
underlying ``self._outfile`` object.  The delegation is accomplished via the
``__getattr__`` method; consult :ref:`the language reference <attribute-access>`
for more information about controlling attribute access.
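
For example, wrapping an in-memory text stream shows both the overridden method
and a delegated one in action (assuming the ``UpperOut`` class above)::

   import io

   out = UpperOut(io.StringIO())
   out.write('hello world\n')   # overridden: the text is uppercased
   print(out.getvalue())        # delegated to the underlying StringIO
   # HELLO WORLD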

Note that for more general cases delegation can get trickier. When attributes
must be set as well as retrieved, the class must define a :meth:`__setattr__`
method too, and it must do so carefully.  The basic implementation of
:meth:`__setattr__` is roughly equivalent to the following::

   class X:
       ...
       def __setattr__(self, name, value):
           self.__dict__[name] = value
       ...

Most :meth:`__setattr__` implementations must modify ``self.__dict__`` to store
local state for self without causing an infinite recursion.
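
A delegating class that also forwards attribute *writes* might look like this
(a sketch, not production code; the ``Wrapper`` name is just for illustration)::

   class Wrapper:
       def __init__(self, wrapped):
           # Bypass our own __setattr__ while initializing, so that the
           # wrapped object itself is stored on this instance.
           object.__setattr__(self, '_wrapped', wrapped)

       def __getattr__(self, name):
           return getattr(self._wrapped, name)

       def __setattr__(self, name, value):
           setattr(self._wrapped, name, value)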


How do I call a method defined in a base class from a derived class that extends it?
------------------------------------------------------------------------------------

Use the built-in :func:`super` function::

   class Derived(Base):
       def meth(self):
           super().meth()  # calls Base.meth

In the example, :func:`super` will automatically determine the instance from
which it was called (the ``self`` value), look up the :term:`method resolution
order` (MRO) with ``type(self).__mro__``, and return the next in line after
``Derived`` in the MRO: ``Base``.
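
You can see that order directly (a small illustration)::

   class Base:
       def meth(self):
           print('Base.meth')

   class Derived(Base):
       def meth(self):
           super().meth()
           print('Derived.meth')

   Derived().meth()        # prints Base.meth, then Derived.meth
   print(Derived.__mro__)  # Derived, then Base, then object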


How can I organize my code to make it easier to change the base class?
----------------------------------------------------------------------

You could assign the base class to an alias and derive from the alias.  Then all
you have to change is the value assigned to the alias.  Incidentally, this trick
is also handy if you want to decide dynamically (e.g. depending on availability
of resources) which base class to use.  Example::

   class Base:
       ...

   BaseAlias = Base

   class Derived(BaseAlias):
       ...
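
The dynamic case might look like this (a sketch; ``use_fallback`` and
``FallbackBase`` are hypothetical names)::

   class FallbackBase:
       ...

   use_fallback = False   # e.g. decided from available resources

   BaseAlias = FallbackBase if use_fallback else Base

   class Derived(BaseAlias):
       ...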


How do I create static class data and static class methods?
-----------------------------------------------------------

Both static data and static methods (in the sense of C++ or Java) are supported
in Python.

For static data, simply define a class attribute.  To assign a new value to the
attribute, you have to explicitly use the class name in the assignment::

   class C:
       count = 0   # number of times C.__init__ called

       def __init__(self):
           C.count = C.count + 1

       def getcount(self):
           return C.count  # or return self.count

``c.count`` also refers to ``C.count`` for any ``c`` such that ``isinstance(c,
C)`` holds, unless overridden by ``c`` itself or by some class on the base-class
search path from ``c.__class__`` back to ``C``.

Caution: within a method of ``C``, an assignment like ``self.count = 42`` creates a
new and unrelated instance attribute named ``count`` in ``self``'s own dict.
Rebinding of a class-static data name must always specify the class, whether
inside a method or not::

   C.count = 314
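
For example, assigning through ``self`` merely hides the class attribute
instead of changing it (continuing the ``C`` example above)::

   c = C()
   c.count = 42      # creates an instance attribute on c only
   print(C.count)    # the shared class attribute is unchanged
   print(c.count)    # 42 -- the instance attribute shadows it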

Static methods are possible::

   class C:
       @staticmethod
       def static(arg1, arg2, arg3):
           # No 'self' parameter!
           ...

However, a far more straightforward way to get the effect of a static method is
via a simple module-level function::

   def getcount():
       return C.count

If your code is structured so as to define one class (or tightly related class
hierarchy) per module, this supplies the desired encapsulation.


How can I overload constructors (or methods) in Python?
-------------------------------------------------------

This answer actually applies to all methods, but the question usually comes up
first in the context of constructors.

In C++ you'd write

.. code-block:: cpp

    class C {
    public:
        C() { cout << "No arguments\n"; }
        C(int i) { cout << "Argument is " << i << "\n"; }
    };

In Python you have to write a single constructor that catches all cases using
default arguments.  For example::

   class C:
       def __init__(self, i=None):
           if i is None:
               print("No arguments")
           else:
               print("Argument is", i)

This is not entirely equivalent, but close enough in practice.

You could also try a variable-length argument list, e.g. ::

   def __init__(self, *args):
       ...

The same approach works for all method definitions.
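
A sketch of dispatching on the number of arguments (the branch bodies are
placeholders for whatever behaviour you need)::

   class C:
       def __init__(self, *args):
           if len(args) == 0:
               print("No arguments")
           elif len(args) == 1:
               print("Argument is", args[0])
           else:
               raise TypeError("expected at most 1 argument")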


I try to use __spam and I get an error about _SomeClassName__spam.
------------------------------------------------------------------

Variable names with double leading underscores are "mangled" to provide a simple
but effective way to define class private variables.  Any identifier of the form
``__spam`` (at least two leading underscores, at most one trailing underscore)
is textually replaced with ``_classname__spam``, where ``classname`` is the
current class name with any leading underscores stripped.

This doesn't guarantee privacy: an outside user can still deliberately access
the "_classname__spam" attribute, and private values are visible in the object's
``__dict__``.  Many Python programmers never bother to use private variable
names at all.
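
A small illustration of the mangling::

   class A:
       def __init__(self):
           self.__spam = 1   # stored as _A__spam

   a = A()
   print(a.__dict__)     # {'_A__spam': 1}
   print(a._A__spam)     # 1 -- the mangled name is still reachable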


My class defines __del__ but it is not called when I delete the object.
-----------------------------------------------------------------------

There are several possible reasons for this.

The :keyword:`del` statement does not necessarily call :meth:`__del__` -- it simply
decrements the object's reference count, and if this reaches zero
:meth:`__del__` is called.

If your data structures contain circular links (e.g. a tree where each child has
a parent reference and each parent has a list of children) the reference counts
will never go back to zero.  Once in a while Python runs an algorithm to detect
such cycles, but the garbage collector might run some time after the last
reference to your data structure vanishes, so your :meth:`__del__` method may be
called at an inconvenient and random time. This is inconvenient if you're trying
to reproduce a problem. Worse, the order in which objects' :meth:`__del__`
methods are executed is arbitrary.  You can run :func:`gc.collect` to force a
collection, but there *are* pathological cases where objects will never be
collected.

Despite the cycle collector, it's still a good idea to define an explicit
``close()`` method on objects to be called whenever you're done with them.  The
``close()`` method can then remove attributes that refer to subobjects.  Don't
call :meth:`__del__` directly -- :meth:`__del__` should call ``close()`` and
``close()`` should make sure that it can be called more than once for the same
object.

Another way to avoid cyclical references is to use the :mod:`weakref` module,
which allows you to point to objects without incrementing their reference count.
Tree data structures, for instance, should use weak references for their parent
and sibling references (if they need them!).
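
A parent link kept as a weak reference might look like this (a sketch; the
``Node`` class is hypothetical)::

   import weakref

   class Node:
       def __init__(self, parent=None):
           self.children = []
           # Keep only a weak reference to the parent so that parent and
           # child do not form a reference cycle.
           self._parent = weakref.ref(parent) if parent is not None else None

       @property
       def parent(self):
           return self._parent() if self._parent is not None else None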

.. XXX relevant for Python 3?

   If the object has ever been a local variable in a function that caught an
   expression in an except clause, chances are that a reference to the object
   still exists in that function's stack frame as contained in the stack trace.
   Normally, calling :func:`sys.exc_clear` will take care of this by clearing
   the last recorded exception.

Finally, if your :meth:`__del__` method raises an exception, a warning message
is printed to :data:`sys.stderr`.


How do I get a list of all instances of a given class?
------------------------------------------------------

Python does not keep track of all instances of a class (or of a built-in type).
You can program the class's constructor to keep track of all instances by
keeping a list of weak references to each instance.
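
A sketch of such a constructor, using :class:`weakref.WeakSet` so that the
registry itself does not keep instances alive (the ``TrackedWidget`` name is
just for illustration)::

   import weakref

   class TrackedWidget:
       _instances = weakref.WeakSet()

       def __init__(self):
           TrackedWidget._instances.add(self)

       @classmethod
       def all_instances(cls):
           return list(cls._instances)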


Why does the result of ``id()`` appear to be not unique?
--------------------------------------------------------

The :func:`id` builtin returns an integer that is guaranteed to be unique during
the lifetime of the object.  Since in CPython, this is the object's memory
address, it happens frequently that after an object is deleted from memory, the
next freshly created object is allocated at the same position in memory.  This
is illustrated by this example:

>>> id(1000) # doctest: +SKIP
13901272
>>> id(2000) # doctest: +SKIP
13901272

The two ids belong to different integer objects, each created just before the
``id()`` call and deleted immediately afterwards, which is why the second
object can reuse the same address.  To be sure that
objects whose id you want to examine are still alive, create another reference
to the object:

>>> a = 1000; b = 2000
>>> id(a) # doctest: +SKIP
13901272
>>> id(b) # doctest: +SKIP
13891296


When can I rely on identity tests with the *is* operator?
---------------------------------------------------------

The ``is`` operator tests for object identity.  The test ``a is b`` is
equivalent to ``id(a) == id(b)``.

The most important property of an identity test is that an object is always
identical to itself, ``a is a`` always returns ``True``.  Identity tests are
usually faster than equality tests.  And unlike equality tests, identity tests
are guaranteed to return a boolean ``True`` or ``False``.

However, identity tests can *only* be substituted for equality tests when
object identity is assured.  Generally, there are three circumstances where
identity is guaranteed:

1) Assignments create new names but do not change object identity.  After the
assignment ``new = old``, it is guaranteed that ``new is old``.

2) Putting an object in a container that stores object references does not
change object identity.  After the list assignment ``s[0] = x``, it is
guaranteed that ``s[0] is x``.

3) If an object is a singleton, it means that only one instance of that object
can exist.  After the assignments ``a = None`` and ``b = None``, it is
guaranteed that ``a is b`` because ``None`` is a singleton.

In most other circumstances, identity tests are inadvisable and equality tests
are preferred.  In particular, identity tests should not be used to check
constants such as :class:`int` and :class:`str` which aren't guaranteed to be
singletons::

    >>> a = 1000
    >>> b = 500
    >>> c = b + 500
    >>> a is c
    False

    >>> a = 'Python'
    >>> b = 'Py'
    >>> c = b + 'thon'
    >>> a is c
    False

Likewise, new instances of mutable containers are never identical::

    >>> a = []
    >>> b = []
    >>> a is b
    False

In the standard library code, you will see several common patterns for
correctly using identity tests:

1) As recommended by :pep:`8`, an identity test is the preferred way to check
for ``None``.  This reads like plain English in code and avoids confusion with
other objects that may have boolean values that evaluate to false.

2) Detecting optional arguments can be tricky when ``None`` is a valid input
value.  In those situations, you can create a singleton sentinel object
guaranteed to be distinct from other objects.  For example, here is how
to implement a method that behaves like :meth:`dict.pop`::

   _sentinel = object()

   def pop(self, key, default=_sentinel):
       if key in self:
           value = self[key]
           del self[key]
           return value
       if default is _sentinel:
           raise KeyError(key)
       return default

3) Container implementations sometimes need to augment equality tests with
identity tests.  This prevents the code from being confused by objects such as
``float('NaN')`` that are not equal to themselves.

For example, here is the implementation of
:meth:`collections.abc.Sequence.__contains__`::

    def __contains__(self, value):
        for v in self:
            if v is value or v == value:
                return True
        return False


How can a subclass control what data is stored in an immutable instance?
------------------------------------------------------------------------

When subclassing an immutable type, override the :meth:`__new__` method
instead of the :meth:`__init__` method.  The latter only runs *after* an
instance is created, which is too late to alter data in an immutable
instance.

All of these immutable classes have a different signature than their
parent class:

.. testcode::

    from datetime import date

    class FirstOfMonthDate(date):
        "Always choose the first day of the month"
        def __new__(cls, year, month, day):
            return super().__new__(cls, year, month, 1)

    class NamedInt(int):
        "Allow text names for some numbers"
        xlat = {'zero': 0, 'one': 1, 'ten': 10}
        def __new__(cls, value):
            value = cls.xlat.get(value, value)
            return super().__new__(cls, value)

    class TitleStr(str):
        "Convert str to name suitable for a URL path"
        def __new__(cls, s):
            s = s.lower().replace(' ', '-')
            s = ''.join([c for c in s if c.isalnum() or c == '-'])
            return super().__new__(cls, s)

The classes can be used like this:

.. doctest::

    >>> FirstOfMonthDate(2012, 2, 14)
    FirstOfMonthDate(2012, 2, 1)
    >>> NamedInt('ten')
    10
    >>> NamedInt(20)
    20
    >>> TitleStr('Blog: Why Python Rocks')
    'blog-why-python-rocks'


How do I cache method calls?
----------------------------

The two principal tools for caching methods are
:func:`functools.cached_property` and :func:`functools.lru_cache`.  The
former stores results at the instance level and the latter at the class
level.

The *cached_property* approach only works with methods that do not take
any arguments.  It does not create a reference to the instance.  The
cached method result will be kept only as long as the instance is alive.

The advantage is that when an instance is no longer used, the cached
method result will be released right away.  The disadvantage is that if
instances accumulate, so too will the accumulated method results.  They
can grow without bound.

The *lru_cache* approach works with methods that have hashable
arguments.  It creates a reference to the instance unless special
efforts are made to pass in weak references.

The advantage of the least recently used algorithm is that the cache is
bounded by the specified *maxsize*.  The disadvantage is that instances
are kept alive until they age out of the cache or until the cache is
cleared.

This example shows the various techniques::

    from functools import cached_property, lru_cache

    class Weather:
        "Lookup weather information on a government website"

        def __init__(self, station_id):
            self._station_id = station_id
            # The _station_id is private and immutable

        def current_temperature(self):
            "Latest hourly observation"
            # Do not cache this because old results
            # can be out of date.

        @cached_property
        def location(self):
            "Return the longitude/latitude coordinates of the station"
            # Result only depends on the station_id

        @lru_cache(maxsize=20)
        def historic_rainfall(self, date, units='mm'):
            "Rainfall on a given date"
            # Depends on the station_id, date, and units.

The above example assumes that the *station_id* never changes.  If the
relevant instance attributes are mutable, the *cached_property* approach
can't be made to work because it cannot detect changes to the
attributes.

The *lru_cache* approach can be made to work, but the class needs to define the
*__eq__* and *__hash__* methods so the cache can detect relevant attribute
updates::

    from functools import lru_cache

    class Weather:
        "Example with a mutable station identifier"

        def __init__(self, station_id):
            self.station_id = station_id

        def change_station(self, station_id):
            self.station_id = station_id

        def __eq__(self, other):
            return self.station_id == other.station_id

        def __hash__(self):
            return hash(self.station_id)

        @lru_cache(maxsize=20)
        def historic_rainfall(self, date, units='cm'):
            'Rainfall on a given date'
            # Depends on the station_id, date, and units.


Modules
=======

How do I create a .pyc file?
----------------------------

When a module is imported for the first time (or when the source file has
changed since the current compiled file was created) a ``.pyc`` file containing
the compiled code should be created in a ``__pycache__`` subdirectory of the
directory containing the ``.py`` file.  The ``.pyc`` file will have a
filename that starts with the same name as the ``.py`` file, and ends with
``.pyc``, with a middle component that depends on the particular ``python``
binary that created it.  (See :pep:`3147` for details.)

One reason that a ``.pyc`` file may not be created is a permissions problem
with the directory containing the source file, meaning that the ``__pycache__``
subdirectory cannot be created. This can happen, for example, if you develop as
one user but run as another, such as if you are testing with a web server.

Unless the :envvar:`PYTHONDONTWRITEBYTECODE` environment variable is set,
creation of a .pyc file is automatic if you're importing a module and Python
has the ability (permissions, free space, etc...) to create a ``__pycache__``
subdirectory and write the compiled module to that subdirectory.

Running Python on a top level script is not considered an import and no
``.pyc`` will be created.  For example, if you have a top-level module
``foo.py`` that imports another module ``xyz.py``, when you run ``foo`` (by
typing ``python foo.py`` as a shell command), a ``.pyc`` will be created for
``xyz`` because ``xyz`` is imported, but no ``.pyc`` file will be created for
``foo`` since ``foo.py`` isn't being imported.

If you need to create a ``.pyc`` file for ``foo`` -- that is, to create a
``.pyc`` file for a module that is not imported -- you can, using the
:mod:`py_compile` and :mod:`compileall` modules.

The :mod:`py_compile` module can manually compile any module.  One way is to use
the ``compile()`` function in that module interactively::

   >>> import py_compile
   >>> py_compile.compile('foo.py')                 # doctest: +SKIP

This will write the ``.pyc`` to a ``__pycache__`` subdirectory in the same
location as ``foo.py`` (or you can override that with the optional parameter
``cfile``).

You can also automatically compile all files in a directory or directories using
the :mod:`compileall` module.  You can do it from the shell prompt by running
``compileall.py`` and providing the path of a directory containing Python files
to compile::

       python -m compileall .


How do I find the current module name?
--------------------------------------

A module can find out its own module name by looking at the predefined global
variable ``__name__``.  If this has the value ``'__main__'``, the program is
running as a script.  Many modules that are usually used by importing them also
provide a command-line interface or a self-test, and only execute this code
after checking ``__name__``::

   def main():
       print('Running test...')
       ...

   if __name__ == '__main__':
       main()


How can I have modules that mutually import each other?
-------------------------------------------------------

Suppose you have the following modules:

:file:`foo.py`::

   from bar import bar_var
   foo_var = 1

:file:`bar.py`::

   from foo import foo_var
   bar_var = 2

The problem is that the interpreter will perform the following steps:

* main imports ``foo``
* Empty globals for ``foo`` are created
* ``foo`` is compiled and starts executing
* ``foo`` imports ``bar``
* Empty globals for ``bar`` are created
* ``bar`` is compiled and starts executing
* ``bar`` imports ``foo`` (which is a no-op since there already is a module named ``foo``)
* The import mechanism tries to read ``foo_var`` from ``foo`` globals, to set ``bar.foo_var = foo.foo_var``

The last step fails, because Python isn't done with interpreting ``foo`` yet and
the global symbol dictionary for ``foo`` is still empty.

The same thing happens when you use ``import foo``, and then try to access
``foo.foo_var`` in global code.

There are (at least) three possible workarounds for this problem.

Guido van Rossum recommends avoiding all uses of ``from <module> import ...``,
and placing all code inside functions.  Initializations of global variables and
class variables should use constants or built-in functions only.  This means
everything from an imported module is referenced as ``<module>.<name>``.
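
Applied to the modules above, that first workaround might look like this (the
``use_*`` function names are only for illustration):

:file:`foo.py`::

   import bar

   foo_var = 1

   def use_bar():
       return bar.bar_var

:file:`bar.py`::

   import foo

   bar_var = 2

   def use_foo():
       return foo.foo_var

Because neither module touches the other's attributes until a function is
called, both modules are fully initialized before any lookup happens.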

Jim Roskind suggests performing steps in the following order in each module:

* exports (globals, functions, and classes that don't need imported base
  classes)
* ``import`` statements
* active code (including globals that are initialized from imported values).

Van Rossum doesn't like this approach much because the imports appear in a
strange place, but it does work.

Matthias Urlichs recommends restructuring your code so that the recursive import
is not necessary in the first place.

These solutions are not mutually exclusive.


__import__('x.y.z') returns <module 'x'>; how do I get z?
---------------------------------------------------------

Consider using the convenience function :func:`~importlib.import_module` from
:mod:`importlib` instead::

   import importlib

   z = importlib.import_module('x.y.z')


When I edit an imported module and reimport it, the changes don't show up.  Why does this happen?
-------------------------------------------------------------------------------------------------

For reasons of efficiency as well as consistency, Python only reads the module
file on the first time a module is imported.  If it didn't, in a program
consisting of many modules where each one imports the same basic module, the
basic module would be parsed and re-parsed many times.  To force re-reading of a
changed module, do this::

   import importlib
   import modname
   importlib.reload(modname)

Warning: this technique is not 100% fool-proof.  In particular, modules
containing statements like ::

   from modname import some_objects

will continue to work with the old version of the imported objects.  If the
module contains class definitions, existing class instances will *not* be
updated to use the new class definition.  This can result in the following
paradoxical behaviour::

   >>> import importlib
   >>> import cls
   >>> c = cls.C()                # Create an instance of C
   >>> importlib.reload(cls)
   <module 'cls' from 'cls.py'>
   >>> isinstance(c, cls.C)       # isinstance is false?!?
   False

The nature of the problem is made clear if you print out the "identity" of the
class objects::

   >>> hex(id(c.__class__))
   '0x7352a0'
   >>> hex(id(cls.C))
   '0x4198d0'