
unittesting_share

Unit Testing Introduction

what makes a good unit test?

It should be automated and repeatable.

It should be easy to implement.

Once it’s written, it should remain for future use.

Anyone should be able to run it.

It should run at the push of a button.

It should run quickly.

compared with integration testing


integration testing characteristics: integration testing can be further categorized by integration level:
level 1: module-level integration testing
level 2: component-level integration testing
level 3: system-level integration testing

difference, an integration test exercises many units of code that work together to evaluate one or more expected results from the software, whereas a unit test usually exercises and tests only a single unit in isolation.

core techniques: stub and mock

Stub

An external dependency is an object in your system that your code under test interacts with, and over which you have no control. (Common examples are filesystems, threads, memory, time, and so on.)

testing type: state-based testing. State-based testing (also called state verification) determines whether the exercised method worked correctly by examining the state of the system under test and its collaborators (dependencies) after the method is exercised.

example: figure, the original code with the dependency; figure, the same code with a stub breaking the dependency.

Here are some techniques for breaking dependencies (a code sketch follows below):
Extract an interface to allow replacing the underlying implementation. (This is the technique used in the example.)
Inject a stub implementation into the class under test.
Receive an interface at the constructor level.
Receive an interface at the constructor level and save it in a field for later use.
Receive an interface as a property get or set.
Receive an interface as a property get or set and save it in a field for later use.
Get a stub just before a method call.
Receive an interface just before the call in the method under test, using a parameter to the method (parameter injection), a factory class, a local factory method, or variations on the preceding techniques.
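A minimal C++ sketch of the interface-extraction plus constructor-injection technique. The names IFileSystem, StubFileSystem, and LogAnalyzer are invented for illustration only; they are not part of the SeP code base.

#include <cassert>
#include <string>

// The external dependency is hidden behind an extracted interface.
class IFileSystem {
public:
    virtual ~IFileSystem() {}
    virtual bool fileExists(const std::string& path) = 0;
};

// Stub: returns a canned answer so the test fully controls the input.
class StubFileSystem : public IFileSystem {
public:
    bool existsResult;
    StubFileSystem() : existsResult(true) {}
    bool fileExists(const std::string&) { return existsResult; }
};

// Code under test: receives the interface at the constructor level
// and saves it in a field for later use (constructor injection).
class LogAnalyzer {
public:
    explicit LogAnalyzer(IFileSystem* fs) : m_fs(fs) {}
    bool isValidLogFile(const std::string& name) {
        return m_fs->fileExists(name) && name.size() > 4;
    }
private:
    IFileSystem* m_fs;
};

int main() {
    StubFileSystem stub;
    stub.existsResult = true;               // the test controls the dependency
    LogAnalyzer analyzer(&stub);
    assert(analyzer.isValidLogFile("run.log"));   // state-based verification
    return 0;
}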

Mock

testing type: interaction testing. Interaction testing is testing how an object sends input to or receives input from other objects, that is, how that object interacts with other objects.

compared with stubs: we used stubs to make sure that the code under test received all the inputs it needed so that we could test its logic independently. We use mocks to see whether the code under test calls other objects correctly. The code under test may not return any result or save any state, but it has complex logic that needs to result in correct calls to other objects.

explained using pictures: figure, testing using a stub; figure, testing using a mock.

suggestion: one mock object per test. In a test where you test only one thing (which is how I recommend you write tests), there should be no more than one mock object. All other fake objects will act as stubs. Having more than one mock per test usually means you're testing more than one thing, and this can lead to complicated or brittle tests. A hand-rolled mock sketch follows below.
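A minimal hand-rolled mock in C++ illustrating interaction testing. The names IMailSender, MockMailSender, and ErrorReporter are invented for illustration only.

#include <cassert>
#include <string>

class IMailSender {
public:
    virtual ~IMailSender() {}
    virtual void send(const std::string& to, const std::string& body) = 0;
};

// The mock records the interaction so the test can verify it afterwards.
class MockMailSender : public IMailSender {
public:
    int sendCallCount;
    std::string lastTo;
    std::string lastBody;
    MockMailSender() : sendCallCount(0) {}
    void send(const std::string& to, const std::string& body) {
        ++sendCallCount;
        lastTo = to;
        lastBody = body;
    }
};

// Code under test: returns nothing, so only the interaction can be checked.
class ErrorReporter {
public:
    explicit ErrorReporter(IMailSender* mail) : m_mail(mail) {}
    void report(const std::string& error) {
        if (!error.empty())
            m_mail->send("support@example.com", error);
    }
private:
    IMailSender* m_mail;
};

int main() {
    MockMailSender mock;
    ErrorReporter reporter(&mock);
    reporter.report("disk full");
    assert(mock.sendCallCount == 1);        // interaction testing: verify the call,
    assert(mock.lastBody == "disk full");   // not the state of the code under test
    return 0;
}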

Fake: a generic term that covers both stubs and mocks; whether a particular fake acts as a stub or as a mock depends on how it is used in the current test.

isolation frameworks

available isolation frameworks for C++: mockpp, Google C++ Mocking Framework (gmock)

how to use an isolation framework: gmock (see the sketch below)
build notes: gcc 4.3 #include compile problem
compile macros: GTEST_OS_CYGWIN, GTEST_USE_OWN_TR1_TUPLE=1, GTEST_HAS_CLONE=0
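A small sketch of how gmock is typically used together with googletest. The interface, mock class, and test names are invented for illustration; MOCK_METHOD1 is the older macro style that matches the toolchain era mentioned above.

#include <string>
#include <gmock/gmock.h>
#include <gtest/gtest.h>

// The dependency to be isolated, behind an interface.
class IFileSystem {
public:
    virtual ~IFileSystem() {}
    virtual bool fileExists(const std::string& path) = 0;
};

// gmock generates the mock method implementation for us.
class MockFileSystem : public IFileSystem {
public:
    MOCK_METHOD1(fileExists, bool(const std::string& path));
};

TEST(FileCheckTest, AsksFileSystemExactlyOnce) {
    MockFileSystem fs;
    EXPECT_CALL(fs, fileExists("run.log"))      // expected interaction
        .Times(1)
        .WillOnce(::testing::Return(true));
    EXPECT_TRUE(fs.fileExists("run.log"));      // exercise; expectation is
}                                               // verified when fs is destroyed

int main(int argc, char** argv) {
    ::testing::InitGoogleMock(&argc, argv);
    return RUN_ALL_TESTS();
}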

do we really need unit testing?

requirement: testable design

why request a testable design? Tests against our software are another type of user. That user has strict demands for our software, but they all stem from one mechanical request: testability. That request can influence the design of our software in various ways, mostly for the better.

guidelines and benefits (a small code sketch follows below):

Make methods virtual by default. This allows you to override the methods in a derived class for testing. Overriding allows for changing behavior or breaking a call to an external dependency.

Use interface-based designs. This allows you to use polymorphism to replace dependencies in the system with your own stubs or mocks.

Make classes nonsealed by default. You can't override anything virtual if the class is sealed (final in Java).

Avoid instantiating concrete classes inside methods with logic. Get instances of classes from helper methods, factories, Inversion of Control containers such as Unity, or other places, but don't directly create them. This allows you to serve up your own fake instances of classes to methods that require them, instead of being tied down to working with an internal production instance of a class.

Avoid direct calls to static methods. Prefer calls to instance methods that later call statics. This allows you to break calls to static methods by overriding instance methods. (You won't be able to override static methods.)

Avoid constructors and static constructors that do logic. Overriding constructors is difficult to implement. Keeping constructors simple will simplify the job of inheriting from a class in your tests.

Separate singleton logic from singleton holder. If you have a singleton, have a way to replace its instance so you can inject a stub singleton or reset it.

disadvantages:

Amount of work. More code, and work, is required when testability is involved, but designing for testability makes you think about the user of your API more, which is a good thing.

Complexity. Designing for testability can sometimes feel a little (or a lot) like it's overcomplicating things. You can find yourself adding interfaces where it doesn't feel natural to use interfaces, or exposing class behavior semantics that you hadn't considered before. In particular, when many things have interfaces and are abstracted away, navigating the code base to find the real implementation of a method can become more difficult and annoying.

Exposing sensitive IP. Many projects have sensitive intellectual property that shouldn't be exposed, but which designing for testability would force to be exposed.

Sometimes you can't. Sometimes there are political or other reasons for the design to be done a specific way, and you can't change or refactor it. Sometimes you don't have the time to refactor your design, or the design is too fragile to refactor.

when testable design breaks down: by following testable object-oriented design principles, you might get testable designs as a byproduct, but testability should not be a goal in your design. There are tools that can help replace dependencies (for example, Typemock Isolator in .NET code) without needing to refactor the code for testability.
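A C++ sketch of the "separate singleton logic from singleton holder" guideline. IClock, SystemClock, ClockHolder, and FixedClock are hypothetical names used only for illustration.

#include <ctime>

// The singleton's behavior lives behind an interface with virtual methods.
class IClock {
public:
    virtual ~IClock() {}
    virtual long now() = 0;
};

class SystemClock : public IClock {
public:
    long now() { return static_cast<long>(std::time(0)); }
};

// The holder only manages the instance; tests can swap it out or reset it.
class ClockHolder {
public:
    static IClock* instance() {
        if (s_instance == 0)
            s_instance = new SystemClock;            // default production instance
        return s_instance;
    }
    static void setInstance(IClock* clock) { s_instance = clock; }   // test seam
    static void reset() { s_instance = 0; }
private:
    static IClock* s_instance;
};
IClock* ClockHolder::s_instance = 0;

// A stub clock a test could inject through the holder.
class FixedClock : public IClock {
public:
    long now() { return 1234; }   // deterministic time for tests
};

int main() {
    FixedClock fixed;
    ClockHolder::setInstance(&fixed);          // inject the stub singleton
    long t = ClockHolder::instance()->now();   // code under test would call this
    ClockHolder::reset();
    return (t == 1234) ? 0 : 1;
}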

tough questions when integrating unit testing into the organization

How much time will this add to the current process?
When asking about time, team leads may really be asking, "What should I tell my project manager when we go way past our due date?" They may actually think the process is useful but are looking for ammunition for the upcoming battle. They may also be asking the question not in terms of the whole product, but in terms of specific feature sets or functionality. A project manager or customer who asks about timing, on the other hand, will usually be talking in terms of full product releases. Because different people care about different scopes, the answers you give them may vary. For example, unit testing can double the time it takes to implement a specific feature, but the overall release date for the product may actually be reduced. To understand this, let's look at a real example I was involved with.

An example: a large company wanted to implement unit testing into their process, beginning with a pilot project. The pilot consisted of a group of developers adding a new feature to a large existing application. The company's main livelihood was in creating this large billing application and customizing parts of it for various clients. The company had thousands of developers around the world. The following measures were taken to test the pilot's success:
the time the team took for each of the development stages
the overall time for the project to be released to the client
the number of bugs found by the client after the release
The same statistics were collected for a similar feature created by a different team for a different client. The two features were nearly the same size, and the teams were roughly at the same skill and experience level. Both tasks were customization efforts, one with unit tests, the other without. Overall, the time to release with tests was less than without tests. Still, the managers on the team with the unit tests didn't initially believe the pilot would be a success, because they looked only at the implementation (coding) statistic as the criterion for success, instead of the bottom line. It took twice the amount of time to code the feature (because unit tests cause you to write more code). Despite this, the time "wasted" more than made up for itself when the QA team found fewer bugs to deal with. That's why it's important to emphasize that, although unit testing can increase the amount of time it takes to implement a feature, the time balances out over the product's release cycle because of increased quality and maintainability.

Will my QA job be at risk because of this?
Unit testing doesn't eliminate QA-related jobs. QA engineers will receive the application with full unit-test suites, which means they can make sure all the unit tests pass before they start their own testing process. Having unit tests in place will actually make their job more interesting. Instead of doing UI debugging (where every second button click results in an exception of some sort), they will be able to focus on finding more logical (applicative) bugs in real-world scenarios. Unit tests provide the first layer of defense against bugs, and QA work provides the second layer, the user's acceptance layer. As with security, the application always needs to have more than one layer of protection. Allowing the QA process to focus on the larger issues can produce better applications. In some places, QA engineers write code, and they can help write unit tests for the application. That happens in conjunction with the work of the application developers and not instead of it. Both developers and QA engineers can write unit tests.

How do we know this is actually working?
To determine whether your unit testing is working, create a metric of some sort, as discussed in section 8.2.5. If you can measure it, you'll have a way to know; plus, you'll feel it. A sample test-code-coverage report (coverage per build), created by running a tool like NCover for .NET automatically during the build process, can demonstrate progress in one aspect of development. Code coverage is a good starting point if you're wondering whether you're missing unit tests.

Is there proof that unit testing helps?
There aren't any specific studies I can point to on whether unit testing helps achieve better code quality. Most related studies talk about adopting specific agile methods, with unit testing being just one of them. Some empirical evidence can be gleaned from the web, of companies and colleagues having great results and never wanting to go back to a code base without tests. A few studies on TDD can be found at http://biblio.gdinwiddie.com/biblio/StudiesOfTestDrivenDevelopment.

Why is the QA department still finding bugs?
The job of a QA engineer is to find bugs at many different levels, attacking the application from many different approaches. Usually a QA engineer will perform integration-style testing, which can find problems that unit tests can't. For example, the way different components work together in production may point out bugs even though the individual components pass unit tests (which work well in isolation). In addition, a QA engineer may test things in terms of use cases or full scenarios that unit tests usually won't cover. That approach can discover logical bugs or acceptance-related bugs and is a great help to ensuring better project quality. A study by Glenford Myers showed that developers writing tests were not really looking for bugs, and so found only half to two-thirds of the bugs in an application. Broadly, that means there will always be jobs for QA engineers, no matter what. Although that study is 30 years old, I think the same mentality holds today, which makes the results still relevant.

We have lots of code without tests: where do we start?
Studies conducted in the 1970s and 1980s showed that, typically, 80 percent of the bugs are found in 20 percent of the code. The trick is to find the code that has the most problems. More often than not, any team can tell you which components are the most problematic. Start there. You can always add some metrics relating to the number of bugs per class. Testing legacy code requires a different approach than writing new code with tests. See chapter 9 for more details.

We work in several languages: is unit testing feasible?
Sometimes tests written in one language can test code written in other languages, especially if it's a .NET mix of languages. You can write tests in C# to test code written in VB.NET, for example. Sometimes each team writes tests in the language they develop in: C# developers can write tests in C# using NUnit or MbUnit, and C++ developers can write tests using one of the C++-oriented frameworks, such as CppUnit. I've also seen solutions where people who wrote C++ code would write managed C++ wrappers around it and write tests in C# against those managed C++ wrappers, which made things easier to write and maintain.

What if we develop a combination of software and hardware?
If your application is made of a combination of software and hardware, you need to write tests for the software. Chances are, you already have some sort of hardware simulator, and the tests you write can take advantage of this. It may take a little more work, but it's definitely possible, and companies do this all the time.

How can we know we don't have bugs in our tests?
You need to make sure your tests fail when they should and pass when they should. Test-driven development is a great way to make sure you don't forget to check those things.

My debugger shows that my code works: why do I need tests?
You may be sure your code works fine, but what about other people's code? How do you know it works? How do they know your code works and that they haven't broken anything when they make changes? Remember that coding is just the first step in the life of the code. Most of its life, the code will be in maintenance mode. You need to make sure it will tell people when it breaks, using unit tests. A study by Curtis, Krasner, and Iscoe showed that most defects don't come from the code itself, but result from miscommunication between people, requirements that keep changing, and a lack of application domain knowledge. Even if you're the world's greatest coder, chances are that, if someone tells you to code the wrong thing, you'll do it. And when you need to change it, you'll be glad you have tests for everything else to make sure you don't break it.

Must we do TDD-style coding?
TDD is a style choice. I personally see a lot of value in TDD, and many people find it productive and beneficial, but others find that writing the tests after the code is good enough for them. You can make your own choice.

working with legacy code

problems:
It was difficult to write tests against existing code.
It was next to impossible to refactor the existing code (or there was not enough time to do it).
Some people didn't want to change their designs.
Tooling (or lack of tooling) was getting in the way.
It was difficult to determine where to begin.

strategies:

Where do you start adding tests? Assuming you have existing code inside components, you'll need to create a priority list of components for which testing makes the most sense. Several factors can affect each component's priority:
Logical complexity: the amount of logic in the component, such as nested ifs, switch cases, or recursion. Tools for checking cyclomatic complexity can also be used to determine this.
Dependency level: the number of dependencies in the component. How many dependencies do you have to break in order to bring this class under test? Does it communicate with an outside email component, perhaps, or does it call a static log method somewhere?
Priority: the component's general priority in the project.
You can give each component a rating for these factors, from 1 (low priority) to 10 (high priority), producing a test-feasibility table. From that data you can create a graph of the components by their value to the project against their number of dependencies (figure 9.1). You can safely ignore items that are below a designated threshold of logic (usually 2 or 3), so in the example Person and ConfigManager can be ignored, leaving only the top two components. There are two basic ways to look at the graph and decide what to test first (figure 9.2): choose the component that is more complex and easier to test (top left), or choose the one that is more complex and harder to test (top right). The question now is which path you should take: should you start with the easy stuff or the hard stuff?

Choosing a selection strategy. As explained above, you can start with the components that are easy to test or the ones that are hard to test (because they have many dependencies). Each strategy presents different challenges.

Writing integration tests before refactoring. If you do plan to refactor your code for testability (so you can write unit tests), a practical way to make sure you don't break anything during the refactoring phase is to write integration-style tests against your production system. The process is relatively simple:
Add one or more integration tests (no mocks or stubs) to the system to prove the original system works as needed.
Refactor, or add a failing test for the feature you're trying to add to the system.
Refactor and change the system in small chunks, and run the integration tests as often as you can, to see if you break something.

Important tools for legacy code unit testing:
Isolate dependencies easily with Typemock Isolator [.NET]
Find testability problems with Depender [.NET]
Use JMockit for Java legacy code [Java]
Use Vise while refactoring your Java code [Java]
Use FitNesse for acceptance tests before you refactor [.NET and Java]
Read Michael Feathers' book on legacy code, Working Effectively with Legacy Code
Use NDepend to investigate your production code [.NET]
Use ReSharper to navigate and refactor production code [.NET]
Detect threading issues with Typemock Racer [.NET]

CppUnit framework

definition

History of SUnit: SUnit, written by Kent Beck for Smalltalk, is the original xUnit testing framework; it was later ported to Java as JUnit, which in turn inspired ports to many other languages.

CppUnit is the C++ port in the xUnit family that started with SUnit.

cppunit class analysis


classes for management/organization: Test, TestLeaf, TestComposite, TestSuite, TestPath

classes for testing: TestCase, TestCaller, TestFixture, TestRunner, TestListener, TestResult, TestResultCollector. TestResult manages the TestListeners (registration and event dispatch), as well as the stop flag indicating whether the current test run should be interrupted. A TestResultCollector is a TestListener that collects the results of executing a test case; it is an instance of the Collecting Parameter pattern.

UML class diagram (figure)

how to write test code

Method 1: TestCase + TestResult + TestResultCollector + TextOutputter (see the sketch below)
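A minimal sketch of Method 1, wiring a raw TestCase to a TestResult, TestResultCollector, and TextOutputter by hand. The AdditionTest class is a made-up example.

#include <iostream>
#include <cppunit/TestCase.h>
#include <cppunit/TestAssert.h>
#include <cppunit/TestResult.h>
#include <cppunit/TestResultCollector.h>
#include <cppunit/TextOutputter.h>

// A raw TestCase: override runTest() with the assertions.
class AdditionTest : public CppUnit::TestCase {
public:
    AdditionTest() : CppUnit::TestCase("AdditionTest") {}
    void runTest() {
        CPPUNIT_ASSERT_EQUAL(4, 2 + 2);
    }
};

int main() {
    CppUnit::TestResult controller;            // dispatches test events
    CppUnit::TestResultCollector collector;    // listener that stores the outcome
    controller.addListener(&collector);

    AdditionTest test;
    test.run(&controller);

    CppUnit::TextOutputter outputter(&collector, std::cout);
    outputter.write();                          // print the collected results
    return collector.wasSuccessful() ? 0 : 1;
}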

Method 2: TestCase + TestRunner

Method 3: TestCaller + TestResult

Method 4: TestCase + TestSuite (macro) + TestRunner + Registry

Method 5: TestCaller + TestSuite (no macro) + TestRunner + no Registry

Method 6: TestFixture + TestSuite (macro) + TestRunner + Registry. This is the recommended way to write test code with CppUnit; an example follows below.
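A minimal sketch of Method 6: a TestFixture whose suite is generated by the helper macros, registered with the test factory registry, and executed through a TextTestRunner (mirroring the runner usage shown later in the SeP project code). CounterTest is a made-up example.

#include <cppunit/TestFixture.h>
#include <cppunit/extensions/HelperMacros.h>
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

class CounterTest : public CppUnit::TestFixture {
    CPPUNIT_TEST_SUITE(CounterTest);     // macro generates the TestSuite
    CPPUNIT_TEST(testIncrement);         // macro generates a TestCaller
    CPPUNIT_TEST_SUITE_END();
public:
    void setUp()    { m_value = 0; }     // runs before each test
    void tearDown() {}                   // runs after each test
protected:
    void testIncrement() {
        ++m_value;
        CPPUNIT_ASSERT_EQUAL(1, m_value);
    }
private:
    int m_value;
};

// Register the suite with the global (unnamed) registry.
CPPUNIT_TEST_SUITE_REGISTRATION(CounterTest);

int main() {
    CppUnit::TextTestRunner runner;
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
    return runner.run() ? 0 : 1;   // run() returns true if all tests passed
}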

Other coding notes
understand the pointers (object ownership) in CppUnit
customize the output format
track the testing time (a listener sketch follows below)
macros:
macros used to automatically generate the TestCaller and TestSuite: CPPUNIT_TEST_SUITE, CPPUNIT_TEST_SUB_SUITE, CPPUNIT_TEST, CPPUNIT_TEST_SUITE_END
macro used to automatically add a fixture suite to a named registry: CPPUNIT_TEST_SUITE_NAMED_REGISTRATION
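One way to track test execution time is to attach a custom TestListener to the TestResult. The TimingListener class below is an invented example, not part of CppUnit itself.

#include <ctime>
#include <iostream>
#include <cppunit/Test.h>
#include <cppunit/TestListener.h>
#include <cppunit/TestResult.h>
#include <cppunit/extensions/TestFactoryRegistry.h>

// A listener that reports how long each test took.
class TimingListener : public CppUnit::TestListener {
public:
    void startTest(CppUnit::Test* /*test*/) { m_start = std::clock(); }
    void endTest(CppUnit::Test* test) {
        double seconds = double(std::clock() - m_start) / CLOCKS_PER_SEC;
        std::cout << test->getName() << ": " << seconds << " s" << std::endl;
    }
private:
    std::clock_t m_start;
};

int main() {
    CppUnit::TestResult controller;
    TimingListener timer;
    controller.addListener(&timer);   // attach next to other listeners
    CppUnit::Test* suite = CppUnit::TestFactoryRegistry::getRegistry().makeTest();
    suite->run(&controller);          // each test's duration is printed
    return 0;
}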

how to compile/build

compile with cppunit: g++ <C/C++ file> -I$CPPUNIT_HOME/include -L$CPPUNIT_HOME/lib -lcppunit

more information

cppunit sourceforge index

cppunit documentation

cppunit wiki

CppUnit in SeP system

to be tested project

project name: IDFC_app

project description: This is exactly the same project that the NOBE team is developing; no changes need to be made to it.

testing framework project

project name

project description: It is a library project; its output can be a static or a dynamic library. In order to be used by SeP projects, it is imported and configured as a QNX project in the QNX Momentics IDE. Once it is built and the library it generates is copied to the required place (in our case $cppunit_root_folder/lib/), it does not need to be built again.


testing projects

project 1

project name
depends on projects: the to-be-tested project, the testing framework project
project description: this project tests only the bexphandler module of the IDFC_app application; it uses stub/mock techniques to isolate the bexphandler module code under test.
project files layout
project settings: compile_inc, compile_src, compile_macro, link_libraries. Since the mcsserver component is stubbed, the mcsserver library is not linked.

code analysis

code under test: CBEXPHandler::sendMsgToMCS()

analysis
input: all the parameters
output: return value; log; global variable status: CBEXPHandler::sDeviceClientList[kusMaxNumClient]
dependencies:
CIDFCLogger::DebugLog()
CBEXPHandler::CBEXPHandler_setClientInBusyState()
CBEXPHandler::CBEXPHandler_setClientNotResponding()
CMCSServerInterface::instance()->getTargetIsConnected() / CMCSServerFacade::getTargetIsConnected(SDeviceID sTrgtDeviceID)
CMCSServerInterface::instance()->sendCommand() / CMCSServerFacade::sendCommand()
open questions: sendMsgToMCS() is a private method of CBEXPHandler, so how can the testing code invoke and test it? Also, how many times CMCSServerInterface::instance()->sendCommand() gets invoked is important for this function; how can that be tested?

test strategy
test output:
return value: correctness can be checked by directly inspecting the return value.
log: has a dependency on CIDFCLogger::DebugLog(); do not test.
global variable: has a dependency on CBEXPHandler::CBEXPHandler_setClientInBusyState() and CBEXPHandler::CBEXPHandler_setClientNotResponding(); do not test.
break the dependencies:
Use an empty stub for CIDFCLogger::DebugLog().
Use the real code for CBEXPHandler::CBEXPHandler_setClientInBusyState() and CBEXPHandler::CBEXPHandler_setClientNotResponding(), since these two functions are in the same class as the code under test.
Looking into the implementation of CMCSServerInterface, it is a simple wrapper of CMCSServerFacade. Since the bexphandler module contains the real implementation of CMCSServerInterface, we instead use a fake CMCSServerFacade to break the dependency that CBEXPHandler::sendMsgToMCS() has on CMCSServerInterface. This is not good practice: most of the time a fake object should replace a function's direct dependency. The reason a fake CMCSServerInterface is not used here is that in the QNX IDE there is no way to compile the code under test without also compiling the real CMCSServerInterface class, because bexphandler.cpp and MCSServerInterface.cpp are in the same folder, and the QNX IDE cannot compile one file while excluding another file in the same folder.
Use a macro in the project settings to allow the testing code to test the private function.
Use the mock technique in the fake CMCSServerFacade class to record the interaction information.

testing code

main:

#include <cstdlib>
#include <iostream>
#include <cppunit/ui/text/TestRunner.h>
#include <cppunit/extensions/TestFactoryRegistry.h>

int main(int argc, char *argv[])
{
    CppUnit::TestFactoryRegistry &registry = CppUnit::TestFactoryRegistry::getRegistry();
    registry.registerFactory(
        &(CppUnit::TestFactoryRegistry::getRegistry( "BEXPHandler" )) );
    CppUnit::Test *test = registry.makeTest();

    CppUnit::TextTestRunner runner;
    runner.addTest(test);
    runner.run();
    return 0;
}

CBEXPHandlerTest (for example):

header:

#ifndef CBEXPHANDLERTEST_H_
#define CBEXPHANDLERTEST_H_

// CppUnit framework header files
#include <cppunit/TestFixture.h>
#include <cppunit/extensions/HelperMacros.h>

// Tested class forwards
class CBEXPHandler;

namespace SeP_UNIT_TESTING
{
class CBEXPHandlerTest : public CppUnit::TestFixture
{
    CPPUNIT_TEST_SUITE( CBEXPHandlerTest );
    CPPUNIT_TEST( test_sendMsgToMCS );
    CPPUNIT_TEST_SUITE_END();

public:
    void setUp();
    void tearDown();

protected:
    void test_sendMsgToMCS();

private:
    CBEXPHandler* testedObject;
};
}

#endif /* CBEXPHANDLERTEST_H_ */

cpp:

includes

setup/teardown:

void CBEXPHandlerTest::setUp()
{
    testedObject = CBEXPHandler::getInstance();
}

void CBEXPHandlerTest::tearDown()
{
    if (NULL != testedObject)
        delete testedObject;
}

testing function test_sendMsgToMCS:

void CBEXPHandlerTest::test_sendMsgToMCS()
{
    // ------------------------------------
    // Declare variables
    // ------------------------------------
    // parameters
    SDeviceID devID;
    char *pcMsg = NULL;
    unsigned short usMsgLength = 0;
    unsigned short usCommandID = 0;
    unsigned short usDestinationID = 0;
    // verify helpers
    short result = 0;
    short expectedResult = 0;
    int iCounterBefore = 0;
    int iCounterAfter = 0;
    CMCSServerFacade_stub* dependency_stub =
        dynamic_cast<CMCSServerFacade_stub*>(CMCSServerFacade::instance());

    // ----------------------------------------------------
    // Testing Scenario 1
    // prerequisite:
    //     target is disconnected
    // expected return:
    //     TARGET_NOT_CONNECTED
    // ----------------------------------------------------
    // prepare data
    devID.usDeviceType = 0;
    devID.ulDeviceSerialNo = 7;   // so that the target will be disconnected,
                                  // see stub code.
    // testing
    result = testedObject->sendMsgToMCS(devID, pcMsg, usMsgLength,
                                        usCommandID, usDestinationID);
    // verify
    expectedResult = TARGET_NOT_CONNECTED;
    CPPUNIT_ASSERT_EQUAL(expectedResult, result);

    // ----------------------------------------------------
    // Testing Scenario 2
    // prerequisite:
    //     target is connected
    // ----------------------------------------------------
    // prepare data
    devID.usDeviceType = 0;
    devID.ulDeviceSerialNo = 1;   // target will be connected, see stub code.

    // Turn on counter ---->
    dependency_stub->TriggerSentCounter(true);

    //
    // case 1, response ok
    //
    usCommandID = MCS_SVR_MSG_TARGET_SUB_ID_RESPONSE_OK;
    iCounterBefore = dependency_stub->GetSentCounter();
    result = testedObject->sendMsgToMCS(devID, pcMsg, usMsgLength,
                                        usCommandID, usDestinationID);
    iCounterAfter = dependency_stub->GetSentCounter();
    // verify
    CPPUNIT_ASSERT_EQUAL(1, iCounterAfter - iCounterBefore);
    expectedResult = 1;
    CPPUNIT_ASSERT_EQUAL(expectedResult, result);

    //
    // case 2, not supported
    //
    usCommandID = MCS_SVR_MSG_TARGET_SUB_ID_RESPONSE_NOT_SUPPORT;
    iCounterBefore = dependency_stub->GetSentCounter();
    result = testedObject->sendMsgToMCS(devID, pcMsg, usMsgLength,
                                        usCommandID, usDestinationID);
    iCounterAfter = dependency_stub->GetSentCounter();
    // verify
    CPPUNIT_ASSERT_EQUAL(3, iCounterAfter - iCounterBefore);
    expectedResult = COMMAND_SENT_NOT_PROCESSED;
    CPPUNIT_ASSERT_EQUAL(expectedResult, result);

    //
    // case 3, decode failure
    //
    usCommandID = MCS_SVR_MSG_TARGET_SUB_ID_RESPONSE_DECODE_FAILED;
    iCounterBefore = dependency_stub->GetSentCounter();
    result = testedObject->sendMsgToMCS(devID, pcMsg, usMsgLength,
                                        usCommandID, usDestinationID);
    iCounterAfter = dependency_stub->GetSentCounter();
    // verify
    CPPUNIT_ASSERT_EQUAL(3, iCounterAfter - iCounterBefore);
    expectedResult = SEND_OK_BUT_CRC_ERR;
    CPPUNIT_ASSERT_EQUAL(expectedResult, result);

    //
    // case 4, responses other than cases 1-3; should we retry?
    // how about MCS_SVR_MSG_TARGET_SUB_ID_REQUEST?
    //
    // pick MCS_SVR_MSG_TARGET_SUB_ID_RESPONSE_INVALID_TARGET_ID as one of
    // the other responses
    usCommandID = MCS_SVR_MSG_TARGET_SUB_ID_RESPONSE_INVALID_TARGET_ID;
    iCounterBefore = dependency_stub->GetSentCounter();
    result = testedObject->sendMsgToMCS(devID, pcMsg, usMsgLength,
                                        usCommandID, usDestinationID);
    iCounterAfter = dependency_stub->GetSentCounter();
    // verify
    CPPUNIT_ASSERT_EQUAL(3, iCounterAfter - iCounterBefore);   // should we retry?
    expectedResult = GENERAL_SEND_ERROR;
    CPPUNIT_ASSERT_EQUAL(expectedResult, result);

    // ----------------------------------------------------
    // Testing Scenario 3
    // set 0x20 as the command ID so that the stub code knows
    // to give back the response of SEND_COMMAND_FAILED.
    // ----------------------------------------------------
    usCommandID = 0x20;
    iCounterBefore = dependency_stub->GetSentCounter();
    result = testedObject->sendMsgToMCS(devID, pcMsg, usMsgLength,
                                        usCommandID, usDestinationID);
    iCounterAfter = dependency_stub->GetSentCounter();
    // verify
    CPPUNIT_ASSERT_EQUAL(3, iCounterAfter - iCounterBefore);
    expectedResult = SEND_COMMAND_FAILED;
    CPPUNIT_ASSERT_EQUAL(expectedResult, result);

    // <---- Turn off counter
    dependency_stub->TriggerSentCounter(false);
    dependency_stub->ResetSentCounter();
}

registration using macro:

CPPUNIT_TEST_SUITE_NAMED_REGISTRATION(CBEXPHandlerTest, "BEXPHandler");

fake_IDFC: except for the code under test (in this case the bexphandler module), all the other implementation of the IDFC_app application is stub code. By default a stub function has an empty implementation that simply returns a value as its declaration requires. Only if the code under test requires a stub function to return a specific value, and the default stub implementation can't satisfy it, is the stub function rewritten.

fake_LTA_component: in order to completely isolate the code under test, all the LTA component libraries are also stubbed. As with the fake IDFC code, the fake LTA component code is empty by default and is only rewritten when required.

CMCSServerFacade and CMCSServerFacade_stub: CMCSServerFacade_stub is a derived class of CMCSServerFacade. CMCSServerFacade is a singleton class; when the instance is created, it is created as a CMCSServerFacade_stub:

/** ------------------------------------------------------------
 * Desc: This stub method is not thread-safe;
 *       return the stub object instead so that
 *       we can have more control over the stub
 *       object.
 * ------------------------------------------------------------ */
CMCSServerFacade* CMCSServerFacade::instance()
{
    if (0 == _instance)
    {
        _instance = new CMCSServerFacade_stub;
    }
    return _instance;
}

By internally using an instance of CMCSServerFacade_stub, the stub CMCSServerFacade can have more functionality; for example, it can record how many times it is invoked by its callers (a sketch of such a stub follows below).
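A hypothetical sketch of what the counting stub could look like, reconstructed only from the calls made in the test above (TriggerSentCounter, GetSentCounter, ResetSentCounter). The real SeP declarations, including the signature of sendCommand(), are not part of this map, so a minimal placeholder base class is used here and every detail should be treated as an assumption.

// Hypothetical placeholder for the real facade; the actual class differs.
class CMCSServerFacade
{
public:
    virtual ~CMCSServerFacade() {}
    virtual short sendCommand() { return 0; }   // placeholder signature
    static CMCSServerFacade* instance();
protected:
    static CMCSServerFacade* _instance;
};

class CMCSServerFacade_stub : public CMCSServerFacade
{
public:
    CMCSServerFacade_stub() : m_bCountEnabled(false), m_iSentCounter(0) {}

    // Counter control used by the test code above.
    void TriggerSentCounter(bool bEnable) { m_bCountEnabled = bEnable; }
    int  GetSentCounter() const           { return m_iSentCounter; }
    void ResetSentCounter()               { m_iSentCounter = 0; }

    // The overridden send entry point records the interaction (mock behavior)
    // in addition to returning a canned value.
    virtual short sendCommand()
    {
        if (m_bCountEnabled)
            ++m_iSentCounter;
        return 0;   // canned value; the real stub varies it per test scenario
    }

private:
    bool m_bCountEnabled;
    int  m_iSentCounter;
};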

project 2

project name
depends on projects: the to-be-tested project, the testing framework project
project description: this project does not use any stub/mock techniques, so it is closer to integration testing than to unit testing. The code under test and all of its dependencies are real code.
project files layout
project settings: compile_inc, compile_src, compile_macro, link_libraries. Link all the component libraries.

code analysis

code under test: CDeviceRegistrationHost::CDEVREGHost_removeClientFromClientList()

analysis
input parameters: unsigned short usClientID; SClientListStruct *psClientList; unsigned short &usNumExistingClients
output: return value; output parameters: unsigned long &ulBEXPCtrlBits, unsigned long &ulAllDevicesCtrlBits

test strategy
check the return value: correctness can be verified by directly inspecting the return value.
check the output parameters if necessary.
no need to test interaction with other functions.

testing code

main:

#include <cstdlib>
#include <iostream>
#include <cppunit/ui/text/TestRunner.h>
#include <cppunit/extensions/TestFactoryRegistry.h>

int main(int argc, char *argv[])
{
    CppUnit::TestFactoryRegistry &registry = CppUnit::TestFactoryRegistry::getRegistry();
    registry.registerFactory(
        &(CppUnit::TestFactoryRegistry::getRegistry( "BEXPHandler" )) );
    CppUnit::Test *test = registry.makeTest();

    CppUnit::TextTestRunner runner;
    runner.addTest(test);
    runner.run();
    return 0;
}

DeviceRegistrationHostTest:

header:

#ifndef DEVICEREGISTRATIONHOSTTEST_H_
#define DEVICEREGISTRATIONHOSTTEST_H_

// CppUnit framework header files
#include <cppunit/TestFixture.h>
#include <cppunit/extensions/HelperMacros.h>

// Tested class forwards
class CDeviceRegistrationHost;

namespace SEP_UNIT
{
class DeviceRegistrationHostTest : public CppUnit::TestFixture
{
    CPPUNIT_TEST_SUITE( DeviceRegistrationHostTest );
    CPPUNIT_TEST( testCDEVREGHost_removeClientFromClientList );
    CPPUNIT_TEST_SUITE_END();

public:
    void setUp();
    void tearDown();

protected:
    void testCDEVREGHost_removeClientFromClientList();

private:
    CDeviceRegistrationHost* testedObject;
};
}

#endif /*DEVICEREGISTRATIONHOSTTEST_H_*/

cpp

resources

empty-implementation stub code for the IDFC and LTA components (download)


CppUnit Framework License

GNU Lesser General Public License
