Preface
Unlike unittest, Python's built-in automated testing framework, pytest is an independent framework on both Python 2 and Python 3 and needs to be installed with pip.
pytest was chosen as the automation framework after comparing several frameworks; it has some comparative advantages.
Advantages relative to the unittest framework:
1. fixture setup functions
2. Use case marking, use case rerun, and other features
3. Use case parameterization
4. Compatibility with the unittest and nose frameworks
One of the highlights is the fixture feature, which is introduced separately below.
Advantages of the pytest framework from the official introduction:
1. Very easy to get started, with rich documentation and many examples in the docs
2. Supports both simple unit tests and complex functional tests
3. Supports parameterization
4. Can skip specified tests during execution, or mark test cases that are expected to fail as failed
5. Supports rerunning failed test cases
6. Supports running test cases written with nose and unittest
7. Can generate HTML reports
8. Integrates conveniently with continuous-integration tools such as Jenkins
9. Supports executing a subset of the use cases
10. Has many third-party plug-ins, and custom extensions can be written
Black cat or white cat, a cat that catches mice is a good cat: a framework that meets the needs of most teams is a good framework. Here is a wave of user-manual material on how to use the framework to help with automated testing.
1. Download and installation
(1) Python 2: pytest may already be present; check directly with pytest --version, or open a Python shell and run import pytest — if there is no error, it is installed. However, Python 2 is no longer maintained after 2020, so it is best to move to a Python 3 environment as soon as possible
(2) Python 3: pip install -U pytest
(3) Verify the download: pip show pytest or pytest --version
(4) pytest use case collection rules:
① Test files must start with test_, e.g. test_xx.py (or end with _test)
② Test classes must start with Test and must not have an __init__ method
③ Test functions or methods must start with test_
tip: only names that follow these rules are collected; for example, a file named test_create_trans cannot be recognized — it must follow the test_xxx.py form
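A minimal sketch of a file that satisfies all three collection rules (the file is assumed to be saved as test_demo.py):

```python
# test_demo.py -- file name starts with "test_", so pytest collects it

class TestLogin:              # class name starts with "Test" and has no __init__
    def test_addition(self):  # method name starts with "test_"
        assert 1 + 1 == 2

def test_standalone():        # module-level test function, also collected
    assert "test" in "pytest"
```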
2. Test case design and execution
Main highlights:
* @pytest.fixture
* @pytest.mark.parametrize
* @pytest.mark
* conftest.py file

Other test case design features:
* pytest.xfail()
* pytest.exit()
* pytest.skip()
* @pytest.mark.skipif()
2.1 Main highlights and functions:
1. fixture functions
Understanding fixture functions: they are similar to setup and teardown in the unittest framework in that no explicit call is needed, the framework executes them automatically; however, fixtures differ:
① They can be called explicitly, or set to run automatically
② When called explicitly, the fixture is passed directly into other test cases as a parameter
③ The scope parameter directly determines the range over which the fixture executes
④ The function name of the custom setup operation is up to you, not fixed to setup, teardown, etc.
fixture syntax:
@pytest.fixture(scope="session", autouse=True)
# scope has four levels: session, module, class, function; function is the default level
# autouse indicates whether the fixture is invoked by default for all test cases: True means it runs automatically, False (the default) means it is not called automatically and must be requested
Instructions:
@pytest.fixture is a pytest decorator; anyone familiar with Python syntax knows decorators. If a function is to be used as a fixture, register it as a fixture before the function definition; refer to the application example code for details.
Functions realized:
Some general methods that need to run before test case execution can be executed in advance, and the configured level (e.g. session) determines when they execute.
Application example: take login as an example. Because basically all functional tests need to log in, it is used as a setup function, as follows:
# test_01.py
# -*- coding:utf-8 -*-
import pytest

test_login = [{"username": "18712345678", "passwd": "111111"}]

@pytest.fixture()
def login(request):
    username = request.param['username']
    passwd = request.param['passwd']
    return "Login name: {0}, password: {1}".format(username, passwd)

# Parameterize login: the fixture contains only the procedure, not fixed login
# data, so whichever user test_01 needs to log in as can be supplied as a
# parameterized value
@pytest.mark.parametrize("login", test_login, indirect=True)
def test_01(login):  # login is passed into the test_01 test case as a parameter
    print("Test case 1: %s" % login)

if __name__ == "__main__":
    pytest.main(['-s', 'test_01.py'])

# Note: if the fixture should execute only once and be shared, the fixture
# function must be placed in conftest.py -- much like putting the method in a
# public common file and importing it; each test case that takes the fixture
# as a parameter then triggers login, executing login again each time
The specific execution results are shown in the figure below.
2. Use case parameterization
Syntax:
@pytest.mark.parametrize("argnames", argvalues, indirect)
Instructions:
# argnames: the first parameter must be a string; its content can be a fixture name, or the parameter names the test case requires
# argvalues: the corresponding parameterized data must be passed as a list; several groups of data are written as tuples, or nested as dicts
# indirect: defaults to False; it marks whether the value passed as the first parameter names a fixture. If it is a fixture, pass indirect=True so the values are delivered to the fixture; otherwise they are not passed on
Functions realized:
Parameterization of test cases, including combinations of parameters; refer to the application examples
Application examples:

# First: plain parameterization
@pytest.mark.parametrize("param01, param02", [(1, 2), (2, 3)])
@pytest.mark.parametrize("login", test_login, indirect=True)
def test_02(login, param01, param02):
    print("Login successful: %s" % login)
    print("Test data: %s, %s" % (param01, param02))

Execution results:

# Second: parameter combination -- parameterize param01 and param02 separately;
# by default the pytest framework arranges and combines the parameters and
# passes each combination to the test case
@pytest.mark.parametrize("param01", [1, 2, 3])
@pytest.mark.parametrize("param02", [4, 5])
@pytest.mark.parametrize("login", test_login, indirect=True)
def test_03(login, param01, param02):
    print("Login successful: %s" % login)
    print("Test data: %s, %s" % (param01, param02))

Execution results:
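Stacked parametrize decorators expand into the cartesian product of the value lists, so test_03 above runs once per combination. A plain-Python sketch of that expansion (the login list is assumed to hold the single record from test_login):

```python
import itertools

param01 = [1, 2, 3]
param02 = [4, 5]
login = ["18712345678"]  # one parameterized login record

# pytest generates one test invocation per element of the product
combos = list(itertools.product(login, param02, param01))
print(len(combos))  # 1 * 2 * 3 = 6 invocations of test_03
```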
3. Use case marking
Syntax: @pytest.mark.xxx
Instructions: in @pytest.mark.xxx, xxx is the tag name, defined by you. For example, if one interface use case is for the web side and another is for the APP side, they can be distinguished with tags such as @pytest.mark.web and @pytest.mark.app
Functions realized: marked use cases can be run selectively. If only the web side needs testing, the -m parameter on the command line specifies the use cases to execute:
$pytest -m web test_01.py
pytest then executes only the test cases marked web; for concrete results refer to the application example
Application examples:

@pytest.mark.web
@pytest.mark.parametrize("login", test_login, indirect=True)
def test_04(login):
    print("%s" % login)
    print("Test case marked as a web-side use case")

@pytest.mark.app
@pytest.mark.parametrize("login", test_login, indirect=True)
def test_05(login):
    print("%s" % login)
    print("Test case marked as an app-side use case")

Execution results
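One caveat not covered above: recent pytest versions warn about unknown marks, so custom marks like web and app are best registered in a pytest.ini (a minimal sketch; the descriptions are illustrative):

```ini
[pytest]
markers =
    web: use cases for the web side
    app: use cases for the APP side
```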
4. conftest.py file:
conftest.py is a configuration file that the pytest framework reads by default, so note:
① The conftest.py script name is fixed and cannot be modified
② conftest.py must be in the same package as the test cases being run, and the package should contain an __init__.py
③ There is no need to import conftest.py; pytest finds the use cases automatically
④ To achieve a session-level fixture, it must be placed in this file; a fixture encapsulated in an ordinary new file cannot be called at session level
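A minimal conftest.py sketch following the rules above (the base_url value is a hypothetical example, not part of this project):

```python
# conftest.py -- fixed name, picked up by pytest without any import
import pytest

@pytest.fixture(scope="session")
def base_url():
    # hypothetical shared value, computed once per test session
    return "http://example.com/api"

# any test file in the same package can now declare base_url as a parameter:
# def test_ping(base_url): ...
```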
2.2 Introduction to other useful functions
The remaining functions are easy to use and easy to understand.
1. Mark a use case as an expected failure: pytest.xfail(msg)
Meaning: when some condition is not satisfied and you expect the case to fail, the test case is marked as failed; msg is a string
2. Exit the use cases: pytest.exit(msg)
Meaning: during test case execution, some steps simply must pass before anything else can run, such as logging in. If login fails, there is no need to do anything else: just exit directly
3. Skipping use cases, in three forms:
@pytest.mark.skip(reason="")
pytest.skip(reason)
@pytest.mark.skipif(condition)
# The first, @pytest.mark.skip, is a decorator and can act directly on a method or class
# The second is not a decorator: it performs a skip inside the use case body at runtime. The third is also a decorator and skips when the given judgment condition is met
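A short sketch of the three forms (server_available is a hypothetical check invented for illustration, hard-coded so the sketch runs):

```python
import sys
import pytest

def server_available():
    # hypothetical availability check, hard-coded for this sketch
    return True

@pytest.mark.skip(reason="decorator form: always skipped")
def test_always_skipped():
    assert False  # never executes

@pytest.mark.skipif(sys.version_info < (3, 0), reason="requires Python 3")
def test_py3_only():
    assert True

def test_runtime_skip():
    if not server_available():
        pytest.skip("server unavailable, skipping at runtime")
    assert True
```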
For the specific usage of the above, refer to the additional code below.
conftest.py file:

# fixtures registered in the conftest.py file (BasePerformer, url, routes,
# device_id, right_ver and test_login_valid come from elsewhere in the project)
import hashlib
import sys
import traceback
import pytest

def md5(text):
    """Wrap the MD5 method"""
    hl = hashlib.md5()
    hl.update(text.encode("utf-8"))
    return hl.hexdigest()

@pytest.fixture(scope='session')
def login(request):
    """Login interface; returns r['data'], the "data" dict in the JSON,
    holding the user's basic information after a successful login"""
    print("Login successful: {0}".format(test_login_valid))
    data = {
        "username": request.param['username'],
        "password": md5(request.param['passwd']),
        "device_id": device_id,
        "app_ver": request.param['app_ver'],
    }
    try:
        r = BasePerformer.post(url, routes['login'], data)  # a personal post method
        if request.param['app_ver'] not in right_ver:
            assert r['code'] == 1108, "Version too low, please upgrade before logging in"
            pytest.exit(r['errmsg'])
        assert r['code'] == 1000
    except AssertionError:
        pytest.exit(r['errmsg'])
    except Exception as e:
        traceback.print_exc(file=sys.stdout)
        pytest.exit("Login exception, exiting use cases: " + str(e))
    else:
        return r['data']

@pytest.fixture(scope='session')
def get_trans(login):
    """Waybill summary interface trans_general"""
    try:
        r = BasePerformer.get(url, routes['trans_general'],
                              params={"token": login['token']})
        assert r['code'] == 1000
        if "trans_number" not in r['data']:  # Python 3 replacement for has_key
            pytest.exit("The driver has no waybill, exiting the test")
        return r['data']
    except AssertionError:
        pytest.exit(r['errmsg'])
    except Exception as e:
        pytest.xfail(str(e))
Test case file: test_truck.py

# All test cases in this file need the login interface, so the fixture is
# declared on the class: before the test class executes, login is invoked
@pytest.mark.parametrize("login", test_login_valid, indirect=True)
class TestTruck(object):
    """Vehicle-related interfaces"""

    def truck_info(self, login):
        """Interface for getting vehicle information"""
        r = BasePerformer.get(url, routes['get_truck'],
                              params={'token': login['token']})
        try:
            if not r['data']:
                assert r['code'] == 1701  # 1701: no vehicle bound
                # pytest.skip("No vehicle bound, go bind a vehicle first")
                return False
            assert r['code'] == 1000
            print("The license plate is: %s" % r['data']['plate'])
            return r['data']['plate']
        except Exception as e:
            pytest.xfail("Failed to obtain vehicle information " + str(e))

    @pytest.mark.parametrize("plate", plate_operate['plate'])
    @pytest.mark.parametrize("bound_type", plate_operate['bound_type'])
    def test_bound_plate(self, login, plate, bound_type):
        """Test the vehicle-binding interface"""
        data = {
            "token": login['token'],
            "type": bound_type,
            "plate": plate,
        }
        get_truck_info = self.truck_info(login)
        if (not get_truck_info) and bound_type == "1":
            print("No vehicle currently bound, binding %s directly" % plate_operate)
        elif get_truck_info == drive_plate:
            print("The driver's personal vehicle is currently bound")
            pytest.skip("The correct vehicle is already bound, skipping the use case")
        elif get_truck_info != drive_plate and bound_type == "1":
            print("The currently bound vehicle is not the driver's personal "
                  "vehicle {0}; unbind it first".format(get_truck_info))
            data["type"], data["plate"] = "2", get_truck_info
        print(data)
        r = BasePerformer.post(url, routes['bind_truck'], data=data)
        try:
            assert r['code'] == 1000
            plate = r['data']['plate'] if plate_operate['bound_type'] == "1" else r['data']
            return plate
        except Exception as e:
            pytest.exit("Vehicle not bound successfully, exiting use case")
            pytest.xfail(str(e))
2.3 Test case execution
Test cases are written and executed as you go; there are two ways to execute:
① Command line:
pytest [options] [file]
For the specific parameters see $pytest --help
The most common are -s, -m, -q
② PyCharm execution
This needs the py.test runner. The specific PyCharm setting steps: File - Settings - Tools - Python Integrated Tools; see the figure below for details:
3. Test report generation
Executing the use cases as in step two, whether in PyCharm or on the command line, records a test result, but it is not intuitive enough; using plug-ins gives a more intuitive view of the test report.
3.1. Simple test reports: pytest-html
(1) pytest-html installation
pip install pytest-html
pip show pytest-html
# check whether the installation succeeded
(2) pytest-html report generation
① Switch to the directory of the file you want to execute
$pytest --html=report.html
# by default the report is stored under the current path
② Execute a specified use case
pytest test_exception.py --html=../reports/test_exception.html
# test_exception.py is the use case file to execute
# ../reports/ is the directory address of a dedicated folder created to store test reports
# test_exception.html is the report file name
(3) Failure reruns: pytest-rerunfailures
$ pip install pytest-rerunfailures
$ pip show pytest-rerunfailures
# rerun failed use cases:
pytest --reruns 1 test_exception.py --html=../reports/test_exception.html
# adding the --reruns parameter indicates reruns; 1 is the number of reruns
--reruns=RERUNS              # RERUNS is the number of reruns, default 0
--reruns-delay=RERUN_DELAY   # RERUN_DELAY is the number of seconds to wait after a failure before re-executing; the time unit is s
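If reruns should apply to every run, the options above can also be set once via addopts in pytest.ini (a sketch, assuming pytest-rerunfailures is installed):

```ini
[pytest]
addopts = --reruns 2 --reruns-delay 1
```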
The style of the report page is as follows:
3.2. Pytest + Allure test reports
What needs to be prepared:
PyCharm: because the HTML report generated by allure cannot be opened directly, it displays normally when opened through PyCharm
pytest-allure-adaptor: specific role: generates the XML test report
allure 2.7.0: specific role: the allure tool turns the XML into an HTML-format report
Java 1.8: the environment allure depends on
1. Download pytest-allure-adaptor
pip install pytest-allure-adaptor
pip show pytest-allure-adaptor
2. Generate the XML test report files through pytest-allure-adaptor
$pytest -s -q test_trans.py --alluredir report
# this stores the test results of test_trans.py under the report directory; report is a new directory created under the current directory, and it can also be reassigned
3. Download allure
Download the latest allure 2. After downloading and installing, add "E:\allure-2.7.0\bin" to the environment variables so the command can be used later, just as other software adds environment variables for command-line use; generally the computer needs a restart after adding
4. Generate the HTML report from the XML files with the allure command
allure generate <directory of the XML files generated by allure> -o <storage directory of the test report>
e.g.: allure generate --clean allure_xml/2019-02-21 -o allure_html/2019-02-21
# --clean means overwrite the report
# after execution, enter the report directory and you will see an index.html file
5. Open the test report through PyCharm
Because opening it directly displays abnormally, view it through the following steps:
Allure report presentation