Project results

Overall, in all tasks, the work proposed in terms of activities was carried out successfully. Nevertheless, there were some minor deviations from the proposal in Tasks 1 and 2. Tasks 3 and 4 were completed successfully. Next, for each task, we present the work done.

Input Vector Generator for System-Level Descriptions

The objective of this task was to improve the generation of input vectors for system-level descriptions. Given a system-level description written in a high-level language, input vectors were generated that achieve a user-specified level of coverage. The methods we proposed are based on observability coverage, meaning that we cannot assume that exercising a given execution path guarantees that all instructions in that path are covered. In this task we carried out the following activities:
  • Installation and/or updating of the software needed to develop the simulation and testing systems;
  • Study of existing validation tools for software and hardware. We studied the syntax of SystemC language and the way this language was implemented in C++. The goal of this activity was to try to discover tools and technologies compatible with SystemC;
  • Study and survey of parsers for SystemC. Parsers allow the extraction of information from the program's source code and its manipulation. The following parsers were studied: PinaVM, XOgastan, and GCC/GAWK/Graphviz. We chose PinaVM because its use of the LLVM framework allows greater control over the source code;
  • Development of our own tool, based on the LLVM framework, which draws a graph with all the information necessary for our analysis. We used as input the LLVM IR extracted from one of the intermediate stages of the PinaVM analyser. At this stage we could obtain a graphical representation of the test program;
  • Improvements to the graph extraction tool. These enhancements included a better understanding of which information to extract, so as to avoid irrelevant information, and obtaining the information in a better file format;
  • Implementation of methods to compute two coverage metrics: statement coverage and observability coverage.
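As an illustration of the difference between the two metrics, consider the following sketch (the function and values are hypothetical, not taken from the test programs used in the project):

```cpp
// Both statements in the body execute on every run, so statement
// coverage marks them covered. The value written by the first
// assignment, however, is overwritten before it can reach the
// return value, so an observability-based metric would not count
// that statement as covered: its effect is never observable.
int f(int a) {
    int t = a * 2;  // executed, but its result is killed below
    t = 5;          // overwrites t; only this definition is observable
    return t;
}
```

This is why exercising a path is not enough: the generated input vectors must also make the effect of each instruction propagate to an observable output.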
From this Task, an article was submitted to the journal “Software Testing, Verification and Reliability”. A master's thesis was also concluded and will be presented in October 2015. One report from the research grants was also produced.

Modeling the Variable Dependencies of an Execution Path

The objective of this task was to improve the method of modeling the variable dependencies of an execution path. In order to correctly model every possible variable in a program written in a high-level language, we used a dynamic approach in which the program is executed while the LP problem is being built. For this approach we studied and used solvers based on Satisfiability Modulo Theories (SMT). These solvers allow the almost direct modeling of software expressions without having to greatly modify the source code of the program under test. In this task we completed the following activities:
  • Study and implementation of tools for SystemC that would allow changing the source code;
  • Study of SMT solvers, namely their input and output syntax;
  • Implementation of code modifications in order to model the dependencies between variables in a dynamic way.
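As an example of the solver input syntax mentioned above, the path condition for a branch such as `if (x + y > 10)`, with x and y symbolic, can be stated almost directly in SMT-LIB (a hypothetical fragment, not taken from the project's actual models):

```smt2
(declare-const x Int)
(declare-const y Int)
; path condition for taking the branch
(assert (> (+ x y) 10))
(check-sat)   ; sat if the path is feasible
(get-model)   ; concrete values of x and y that exercise the path
```

The model returned by the solver gives input values that drive execution down the chosen path, which is what makes SMT solvers well suited to this dynamic modeling approach.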
From this Task (together with work done in Task 1), we produced a master's thesis. From this Task and Task 1 we also produced three reports from the research grants.

Adaptive Filter for Echo Canceling

In this task we proposed to implement an adaptive filter system for echo cancellation in a video conference system between two sites, A and B. When someone talks on one side of the system (site A), he/she receives in his/her loudspeaker the voice of the other speaker (on site B) together with an echo of his/her own voice. This echo results from the microphone on site B capturing his/her own voice produced by the loudspeakers (also on site B). With an adaptive filter system at each site we are able to reduce the echo produced in the video conference system. An adaptive filter is required because the echo characteristics change with the relative position of the microphone and loudspeakers, the type of room, the position and movement of people in the room, etc.
An adaptive filter has two main components. One is the filter itself, with the capability to change its coefficient values and hence its frequency response. The other is the algorithm that updates the coefficients at each new iteration. This updating process involves a cost function that uses a criterion for optimum performance of the filter.
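One iteration of the coefficient update described above can be sketched as follows (a minimal illustration; the function name, signature, and step size are ours, not the project's actual implementation):

```cpp
#include <vector>
#include <cstddef>

// One LMS iteration.
// w: filter coefficients, x: the most recent input samples (same length),
// d: desired sample (the signal with echo to be matched),
// mu: adaptation step size (hypothetical value chosen by the caller).
// Returns the error, i.e. the residual after echo subtraction.
double lms_step(std::vector<double>& w, const std::vector<double>& x,
                double d, double mu) {
    double y = 0.0;                                  // filter output y = w . x
    for (std::size_t i = 0; i < w.size(); ++i)
        y += w[i] * x[i];
    double e = d - y;                                // error vs. desired signal
    for (std::size_t i = 0; i < w.size(); ++i)
        w[i] += mu * e * x[i];                       // LMS coefficient update
    return e;
}
```

The cost function here is the mean squared error: each update moves the coefficients a small step (scaled by mu) in the direction that reduces the instantaneous squared error.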
In this task we successfully completed all the activities we proposed and concluded by implementing, in an FPGA, a prototype of an echo cancellation system using hardware and software. Namely, the following activities were done:
  • Study of the various types of adaptive filters to help choose what best fits an echo cancellation filter;
  • Implementation, in software (C language), of the adaptive filter based on the LMS (Least Mean Square) algorithm;
  • Installation and configuration of the Atlys, the Digilent board, which contains a Spartan-6 FPGA, and of the devices necessary for the reception and processing of audio data;
  • Implementation in hardware, in an FPGA (using VHDL), of the adaptive filter algorithm based on LMS (Least Mean Square);
  • Implementation in software (using the C language) of the LMS algorithm that computes the adaptive filter coefficients;
  • Implementation of a microprocessor in the FPGA and of its interface to the filter hardware;
  • Implementation and test of the complete system, with the software that calculates the adaptive filter coefficients executed on the microprocessor in the FPGA.
From this task we published an article at the ISCAS conference. A report from the research grant was also produced.

Hardware/Software Co-validation Tool

One of the most critical steps of embedded system design is the integration of software with hardware. Traditionally, this step was done late in the system design cycle by running the embedded software on a physical chip; verification of the software and of the hardware was thus done separately until the hardware prototype was constructed. To address this, in this Task we proposed to use Instruction Set Simulators to run the software and SystemC to handle the hardware part. An Instruction Set Simulator (ISS) is a simulation model, usually coded in a high-level programming language, which mimics the behavior of a mainframe or microprocessor by reading instructions and maintaining internal variables that represent the processor's registers. We proposed to study two methods for this integration. One method consisted in having the hardware controlled by the ISS, by integrating the ISS in the SystemC description as a module. The other was to have SystemC control the ISS, by having SystemC issue debugging commands. In this task we also proposed to apply our input vector generation methodology to both methods.
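The internal structure of an ISS can be sketched as a fetch/decode/execute loop over a register file (the instruction encoding below is invented for illustration; it is not that of the commercial simulators studied in this task):

```cpp
#include <cstdint>
#include <array>
#include <vector>

// Hypothetical minimal ISS: reads instructions one at a time and
// maintains internal variables (regs, pc) that represent the
// processor's registers, as described in the text.
struct Iss {
    std::array<int32_t, 4> regs{};   // register file of the simulated CPU
    std::size_t pc = 0;              // program counter

    // Each instruction: opcode (0 = load immediate, 1 = add), plus operands.
    struct Insn { int op, rd, a, b; };

    void run(const std::vector<Insn>& prog) {
        while (pc < prog.size()) {
            const Insn& i = prog[pc++];     // fetch
            switch (i.op) {                 // decode and execute
                case 0: regs[i.rd] = i.a; break;                    // li rd, a
                case 1: regs[i.rd] = regs[i.a] + regs[i.b]; break;  // add rd, a, b
            }
        }
    }
};
```

In the first integration method such a loop would run inside a SystemC module; in the second, SystemC would drive it step by step through debugging commands.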
In this task we successfully carried out all the activities we proposed, with the exception of applying our methodology to the two integration methods; this last step was not possible due to delays in Tasks 1 and 2. The activities done in this Task were the following:
  • Study of various ISSs, including the tools from Synopsys and Mentor;
  • Study and understanding of the inner workings of the SystemC platform in order to know how to interface with the ISSs;
  • Modification of the SystemC platform so that it can communicate with the ISSs;
  • Addition of several instructions to the ISS in order to communicate with the SystemC platform;
  • Implementation and testing of the integration of SystemC with the ISS using both methods described above.
In this Task, a report from the research grant was produced.

Collaboration with WinTrust

From our work in Tasks 1 and 2 we started a collaboration with the Portuguese company WinTrust, where we were able to apply the methods researched in the project. WinTrust is a consulting and Information System certification company. It is the only company operating in the Portuguese market that combines expertise in Testing Methodology best practices with total independence from any system integrator. WinTrust's offering focuses on the areas of Software Testing, Software Certification, Test Methodology Consulting Services, Acceptance Tests, and Mediation Services.

In our collaboration with WinTrust we developed a tool, “SOA - Test Accelerator” (SOA-TA), to automatically characterize and test service-oriented architectures. The need for such a tool comes from the increasing gap between testing some independent services and testing their overall interaction. As the system architecture grows in number of services, manually creating test case scenarios becomes a heavy burden. SOA-TA's ultimate goal was to reduce the time spent on combining and orchestrating service calls to simulate a business process. The work was divided into five stages: first, the automatic generation of test cases through the analysis of process descriptions, taking business requirements into consideration; second, the generation of the input set required to execute these test cases; third, the production of specific service calls, by means of test scripts to be run on Apache JMeter; fourth, the execution of these scripts; and fifth, the presentation of the results. SOA-TA will be useful for operations that rely on consecutive service calls and need to ensure overall system compliance with previously set requirements.
In this collaboration Prof. José Costa also presented the paper “Importância do centro NearShore e o aumento da eficiência na Automatização de Testes” (“The importance of the NearShore center and the increase in efficiency in Test Automation”) at the Testing Portugal 2014 conference. This was a good opportunity to present and promote the Cervantes project.