Model-Driven Skills Assessment in Knowledge Management Systems

Antonio Balderas1, Juan Antonio Caballero-Hernández2, Juan Manuel Dodero1, Manuel Palomo-Duarte1 and Iván Ruiz-Rube1

1 Department of Computer Science, University of Cadiz, Av. de la Universidad de Cádiz 10, 11519, Puerto Real, Spain

2 EVALfor Research Group, University of Cadiz, Av. República Árabe Saharaui s/n, 11519, Puerto Real, Spain

E-mail: antonio.balderas@uca.es; juanantonio.caballero@uca.es; juanma.dodero@uca.es; manuel.palomo@uca.es; ivan.ruiz@uca.es

Received 15 January 2019;
Accepted 03 June 2019

Abstract

Organizations need employees who perform satisfactorily in generic skills, such as teamwork, leadership, problem solving or interpersonal abilities, among others. In organizational environments, employees perform work that is not always visible to supervisors, who therefore can hardly assess their performance in generic skills. By using a knowledge management system, users leave a trace of their activity in the system’s records. This research addresses the computer-supported assessment of users’ generic skills from the perspective of Model-Driven Engineering. First, a systematic mapping study is carried out to understand the state of the art. Second, a proposal based on Model-Driven Engineering is presented and then validated through an organizational learning process model. Our results are promising: we were able to conduct a scalable assessment based on objective indicators of the employees’ planning, time management and problem solving skills.

Keywords: knowledge management system, generic skills assessment, organizational learning, Model-Driven Engineering.

1 Introduction

Model-Driven Engineering (MDE) places models at the center of software engineering, supporting the development or automatic execution of software systems based on those models [15]. The growing complexity of organizational processes requires tools to automate, at least partially, their operations [2]. MDE provides an appropriate paradigm for implementing such tools, since it allows environments to be built from domain models expressed in the terminology of the specific field. Learning is a continuous process that takes place within the workplace, both for individuals operating in the learning society and for organizations competing in international markets [46]. To remain competitive, organizations need MDE-based solutions that support the automation of learning in the workplace.

Organizational learning can be defined as the advance in an organization’s knowledge that takes place through experience, and it aims to enhance the organization’s proficiency by linking knowledge transfer and dynamic competence [4, 23]. Organizational learning is therefore crucial for organizations that wish to meet the demands of a competitive and changing environment. It implies procedural knowledge, is highly contextualized by the work setting, and focuses on generic skills [12].

Complementing the specific knowledge required in each subject area, generic skills have become a fundamental asset in the workplace [31]. Generic skills are defined as abilities that professionals should be able to perform regardless of their subject area. Currently, many organizations demand that their staff have generic skills as a complement to the specific skills expected for their particular expertise. From a formative point of view, supervisors must encourage performance in generic skills during organizational learning processes, in both face-to-face and virtual interaction. Unfortunately, measuring an individual’s performance in generic skills is not an easy task, especially in web environments where users are not always visible to their supervisors [7].

Web environments have become the nerve center of many organizations, integrating data, processes, business systems and staff to ensure the quality of the final products [17]. To manage their employees’ knowledge, many organizations have adopted web environments based on Knowledge Management Systems (KMSs) because of their features for communication, learning, information sharing, information retrieval and the integration of learning functions [28]. Although KMSs do not usually provide supervisors with objective indicators of their employees’ performance in generic skills, they do record most of their users’ interactions. Previous studies have collected records from these environments to analyze learning activities [20, 37]. Is it therefore possible to measure employees’ performance in generic skills based on their interaction with KMSs?

To establish the current knowledge on the topic, this work first presents a systematic mapping study to overview the state of the art (Section 2). Then, an organizational learning process model based on MDE for the assessment of acquired skills is described (Section 3). This model was previously applied in a case study consisting of an authentic learning experience [8]. Although the results were promising, the external validity of the process model remained a challenge because of the small sample of learners involved. Consequently, a broader case study involving 112 users was conducted, which is analyzed in this paper (Section 4).

2 Literature Review

This literature review has been carried out systematically through a Systematic Mapping Study (SMS). An SMS is a broad review of the primary studies in a specific area that aims to identify the available evidence on the subject. This study is based on the guidelines proposed by Kitchenham [25] on how to plan, execute and present the results of a literature review in software engineering. In particular, Petersen’s proposal [34] has been used to describe the steps followed to perform this systematic mapping.

2.1 Justification and Research Questions

Evidence obtained from an individual’s interactions within a KMS could help their supervisors to assess their performance in generic skills. To establish the state of the art on this issue, this SMS identifies several aspects of the computer-supported assessment of generic skills. The SMS process begins with the definition of the research questions:

(Q1) Which generic skills have been assessed with computer support from the user’s activity data in virtual environments?

(Q2) Which methods are applied for the assessment of generic skills in virtual environments?

(Q3) Which techniques are applied to assess the users’ generic skills from their interaction with virtual environments?

2.2 Review Protocol

The review protocol defines a set of steps to obtain the bibliography for our study, as described in the following subsections.

2.2.1 Search Engines and Search Terms

We used several well-known digital libraries to find papers: Web of Science, Wiley Online Library, Science Direct, and IEEE Digital Library. To perform the search, terms related to skill assessment were used in the queries, namely: generic skills, generic competences and assessment. However, when combined with terms such as computer-assisted, virtual environment or knowledge-management system, the digital libraries returned very few or no papers. Consequently, this second set of terms was removed from the search.
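For illustration, combining the retained terms yields a query of the following form (the exact syntax varies across each library's advanced search interface):

    ("generic skills" OR "generic competences") AND assessment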

To determine whether a retrieved paper should be part of the selection of primary studies, its title, abstract and keywords were read. When this was not enough, we briefly skimmed the full paper and then read the introduction and the conclusions in more detail.

2.2.2 Selection Criteria

To determine whether a paper obtained in the previous search process should be included in the selected papers, the exclusion criteria were established as follows:

2.2.3 Classification for Data Extraction

To extract the information, the papers have been classified according to the following three aspects: research type, contribution type and research scope.

Research Type

This classification refers to the type of research work carried out by the authors. Among the different approaches for research classification, we followed Wieringa’s proposal [49], as recommended by Petersen [34], which distinguishes validation research, evaluation research, proposals of solution, philosophical papers, opinion papers and experience papers.

Contribution Type

The papers are classified according to the type of contribution that they make to the field in which they are developed: tools, models, frameworks, methods and techniques.

Research Scope

Finally, the papers are classified according to how the assessment of generic skills is carried out: peer and self-assessment, supervisor assessment, combined supervisor, peer and self-assessment, or semi-automated assessment tools.

2.3 Results

After applying the selection criteria, 30 out of 313 papers were selected (Table 1). In addition, a review of the literature on the most assessed generic competences was found and included in this mapping [45]. This review is used to compare the data on these skills with those obtained in this mapping, which provides evidence to answer the first research question.

Table 1 Classification of papers

Ref | Research Type | Contribution Type | Research Scope
[1] | proposal of solution | tool | Peer and self-assessment
[3] | validation research | model | Semi-automated assessment tool
[5] | evaluation research | tool | Peer and self-assessment
[6] | proposal of solution | model | Supervisor assessment
[10] | proposal of solution | framework | Semi-automated assessment tool
[11] | proposal of solution | method | Supervisor assessment
[13] | experience paper | method | Supervisor, peer and self-assessment
[14] | evaluation research | tool | Peer and self-assessment
[18] | experience paper | framework | Peer and self-assessment
[19] | proposal of solution | technique | Semi-automated assessment tool
[22] | validation research | tool | Semi-automated assessment tool
[26] | experience paper | method | Supervisor assessment
[27] | experience paper | technique | Supervisor, peer and self-assessment
[29] | experience paper | model | Supervisor assessment
[33] | validation research | method | Peer and self-assessment
[30] | experience paper | technique | Peer and self-assessment
[32] | opinion paper | model | Peer and self-assessment
[35] | evaluation research | method | Supervisor, peer and self-assessment
[36] | validation research | model | Supervisor assessment
[37] | validation research | framework | Semi-automated assessment tool
[38] | experience paper | method | Peer and self-assessment
[39] | evaluation research | tool | Supervisor assessment
[40] | experience paper | method | Peer and self-assessment
[42] | proposal of solution | tool | Supervisor assessment
[43] | evaluation research | technique | Supervisor, peer and self-assessment
[44] | evaluation research | tool | Supervisor assessment
[47] | experience paper | method | Supervisor assessment
[48] | experience paper | tool | Supervisor assessment
[50] | proposal of solution | model | Supervisor assessment

Figure 1 shows the classification of papers according to their research scope and research type. Most of the studies describe proposals of solutions, experiences, validations and evaluations, while there are almost no opinion or philosophical papers, which usually appear in research topics of a certain maturity. This, together with the small number of works found, confirms the scarcity of research on this topic in the literature.


Figure 1 Distribution of papers by their research scope and their research type.


Figure 2 Distribution of papers by their research type and their contribution type.

The classification of papers according to their research scope and contribution type is shown in Figure 2. The most frequent contribution types are methods, models and tools. Methods appear in both supervisor assessment and peer and self-assessment, whereas models and tools are more frequently considered for supervisor assessment. It should be noted that, in this figure, a paper that falls under two research scopes (for example, supervisor assessment and peer and self-assessment) is counted once for each scope.

2.3.1 Peer and self-assessment

Although peer and self-assessment can reduce the supervisor’s work, this is not always the case. In some of the papers that we analyzed, this approach is only used as a complement to some other form of assessment [27, 43]. In addition, self-assessment may not fully reflect the reality of the user’s performance. For example, in [13] significant discrepancies (of 55.65%) were detected between the grades that learners self-assigned for analysis skills and the ratings assigned to them by supervisors.

2.3.2 Supervisor assessment

Supervisor assessment is applied by using instruments similar to those applied in peer and self-assessment approaches. Scalability is the problem most frequently mentioned by the authors. For example, Hiperion is a recommendation system that supports the design of activities tailored to improve each user’s skills [42]. The main disadvantage of this tool is the time that supervisors must dedicate to assigning the different achievements and the weight of each grade for each skill in the activities. Using this tool, an individualized assessment of each user and each group of users in a problem-based learning methodology was performed in [26]. However, the authors reported that the workload required from the supervisor was considerably higher than usual. Similarly, in [11] the supervisors concluded that the effort they made was excessive, despite the good results obtained.

2.3.3 Semi-automated assessment tools

Psychological tests were among the semi-automated assessment tools found [3]. These solutions require a panel of experts (a Delphi process) to create the test, which unfortunately makes them an expensive option.

Serious games have been applied to the development of skills [10, 22], measuring the user’s performance through a series of indicators. However, serious games are usually built for a specific purpose, typically related to specific skills rather than generic skills.

Although there are many studies of serious games in the literature, only two of them assess generic skills.

The user’s activity records have been used for the assessment of generic skills in two proposals. The first is LACAMOLC, a Pentaho-based tool that maps generic skills performance to indicators obtained from Moodle and Google Docs [37]. The second uses the Comprehensive Training Model of Teamwork Competence (CTMTC) [19], a Moodle web service that facilitates the extraction of information via XML, enabling users’ teamwork to be assessed through their forum interactions. These two papers collect indicators from virtual learning environments that are applicable to several generic skills. However, both approaches use fixed indicators; that is, supervisors cannot check indicators in the KMS other than those provided by the tools.

2.4 Answers to the Research Questions

We are now able to answer the research questions, as follows:

(Q1) Which generic skills have been assessed with computer support from the user’s activity data in virtual environments?

All of the generic skills defined by the Tuning Educational Structures in Europe project [24] have been assessed in the studies that we found. Among them, the most frequently assessed skills are communication (14 papers), teamwork (13 papers) and problem solving (7 papers). A comprehensive list of the assessed generic skills is shown in Table 2.

Table 2 Number of papers that assess each generic skill.

Generic Skill | Papers
Analysis | 4
Communication | 14
Creativity | 3
Critical thinking | 4
Cultural | 1
Decision making | 1
Entrepreneurship | 5
Foreign language | 3
ICT | 3
Interpersonal skills | 3
Leadership | 2
Lifelong learning | 3
Planning and time management | 3
Problem solving | 7
Project management | 2
Research | 2
Responsibility | 3
Self-employment | 2
Teamwork | 13

(Q2) Which methods are applied for the assessment of generic skills in virtual environments?

Two types of studies were found: on the one hand, those in which the software assists the user in the assessment process; and on the other hand, those in which the software partially automates the assessment. The works in the first group are divided into two subgroups depending on who performs the assessment: peer and self-assessment, and supervisor assessment.

The electronic rubric is the most frequently used tool in supervisor, peer and self-assessment. The main problem encountered in supervisor assessment is the increase in workload [26]. However, if the assessment is delegated to the users, then discrepancies may appear between the grades that are self-assigned and those awarded by supervisors [13].

Within the semi-automated assessment tools, five papers were found. Among them, two are based on serious games [10, 22] and another two are based on the analysis of learning processes [19, 37].

(Q3) Which techniques are applied to assess the users’ generic skills from their interaction with virtual environments?

Within the semi-automated assessment tools, there are two papers that describe formative assessment methods. These papers extract indicators from virtual learning environments by analyzing learning records [19, 37].

2.5 Discussion

Several issues were identified in the literature regarding the assessment of generic skills. On the one hand, there are subjectivity issues, because criteria that are valid for one supervisor to assess a generic skill may not be valid for another; on the other hand, there are scalability issues. In many cases, the supervisor’s workload to achieve the objectives of a course is already high, and it is even higher if they also have to create and assess new tasks to measure their staff’s level of performance in generic skills. Moreover, scalability issues increase in the context of KMSs, especially when they contain a very large number of users.

According to the findings of this SMS, in many proposals the supervisor performs the assessment using different tools, but these contributions tend to present scalability issues. We also found papers in which the authors try to minimize this workload by combining or replacing the supervisor’s assessment with peer and self-assessment. This partly avoids the scalability issues, but introduces subjectivity issues in some learners’ assessments. Consequently, supervisors may have to revise their learners’ assessments, facing scalability issues again.

Finally, we found a set of papers that partially automate the assessment of generic skills. On the one hand, there are serious games that emulate a real professional task and, based on the user’s behaviour, obtain a score that serves as an indicator of performance in certain skills. These games tend to be very focused on certain non-generic skills and their implementation is often expensive. In addition, the process of extracting grades is not usually automated or integrated with assessment tools, so a manual procedure is required to capture them.

On the other hand, there are papers that partially cover the objectives of our research: the assessment of generic skills based on indicators obtained from the records of learning environments. In these papers, the tools provide fixed indicators on specific activities; for instance, one of the tools gives the supervisor an indicator to assess learners’ teamwork based on forum interactions. Unfortunately, what the tool provides could be valid for one supervisor in his or her learning context but not for another. If we had a method that provided supervisors with tools to model and design their assessments, they could try different formulas to retrieve specific indicators of their users’ performance in the KMS, refining them to what is considered valid for the assessment of a generic skill in a specific context.

3 Organizational Learning Process

To meet the requirements of training and knowledge assessment in organizational environments, an organizational learning process model is proposed. This section describes the model and its implementation.


Figure 3 Organizational process model for the assessment of acquired skills.

3.1 Organizational Learning Process Model

This research proposes the organizational learning process shown in Figure 3. This process model includes several roles, and both manual and computer-assisted activities. All of these are aimed at the acquisition and assessment of the participants’ skills in training activities of a given organization. The process model comprises the following sequence of activities:

  1. Identification of training needs and required skills: First, the supervisor designs a specific learning plan. This plan lists the catalog of skills expected for learners. The supervisor maintains the catalog of skills and learning outcomes for the organization by using a specific tool.
  2. Design of learning activities: Subsequently, the supervisor designs the learning activities needed for the training plan by using the features of a KMS. This way, he/she is able to monitor the learning activities in which the learners are engaging.
  3. Development of assessment instruments: By using e-assessment systems, detailed feedback-enriched assessment of learners can be supported. Assessment instruments are usually structured in dimensions and sub-dimensions.
  4. Mapping activities to assessment instruments and skills/learning outcomes: Once the assessment instruments have been deployed, the supervisor indicates the skills and learning outcomes that the learners develop through each learning activity. Then, it is necessary to define the relationships among the involved activities, the sub-dimensions of the assessment instruments, and the skills and outcomes (a minimal sketch of such a mapping is given after this list).
  5. Engagement in formative activities: After setting up the learning environment and the needed configurations for the assessment, the training activities in which the learners are involved are carried out.
  6. Performing manual assessment activities: The supervisor has to proceed with the assessment by analyzing the learning results generated by learners. To perform this step, the supervisor uses the assessment instruments previously created according to the required skills.
  7. Performing computer-assisted assessment activities: The analysis of the learning results generated by the learners may be partially automated by using specific tools developed for these purposes.
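
The following is a minimal sketch of how the mapping defined in step 4 could be represented as data. The class and field names are illustrative assumptions for this paper's scenario, not the actual data model used by Gescompeval or the reference implementation:

    from dataclasses import dataclass, field

    @dataclass
    class SkillMapping:
        """Relates one learning activity to the instrument sub-dimensions
        that assess it and to the skills/learning outcomes it develops
        (step 4 of the process model)."""
        activity: str
        sub_dimensions: list[str] = field(default_factory=list)
        skills: list[str] = field(default_factory=list)

    # Illustrative mapping for the case study described in Section 4
    mappings = [
        SkillMapping(
            activity="weekly SQL workshop",
            sub_dimensions=["suitable operators", "expected result",
                            "delivery rules"],
            skills=["problem solving", "planning and time management"],
        ),
    ]

    for m in mappings:
        print(m.activity, "->", ", ".join(m.skills))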

3.2 Reference Implementation

To support the activities described in the Organizational Learning Process Model, a reference KMS [8] is proposed. This system is built on top of a well-known Learning Management System (LMS), and a set of open-source tools has been specifically developed. These tools and the process model activities that they support are presented below.

Moodle (activity 2)

To design and deliver learning activities, we opted to use Moodle as the LMS. We created specific tools to enrich Moodle with features for managing assessment instruments, managing skills and analyzing learning activities by extracting the desired indicators.

EvalCOMIX (activity 3)

EvalCOMIX is a web service to develop assessment instruments. It provides an API that can be integrated with other e-learning systems [41]. A specific block called EvalCOMIX MD was implemented in PHP and JavaScript to integrate EvalCOMIX with Moodle.

Gescompeval (activity 4)

Gescompeval is a web tool for mapping activities to assessment instruments and skills/learning outcomes [21]. It was implemented as a REST web service which provides a read-only API to retrieve these skills and learning outcomes. It was developed in Symfony 2, a PHP framework.
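
As an informal illustration, a read-only API of this kind could be consumed as sketched below. The base URL and the /skills endpoint are hypothetical placeholders, not Gescompeval's actual routes:

    import requests

    BASE_URL = "https://example.org/gescompeval/api"  # hypothetical deployment

    def list_skills():
        """Retrieve the catalog of skills/learning outcomes
        (illustrative endpoint name)."""
        response = requests.get(f"{BASE_URL}/skills", timeout=10)
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        for skill in list_skills():
            print(skill)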

EvalCourse (activity 7)

EvalCourse is an MDE-based tool to perform computer-assisted assessment based on learning analytics [9]. It accepts queries written in SASQL, a Domain-Specific Language (DSL) that is used to design online learning assessments.
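
Judging from the queries used later in the case study (Section 4), SASQL evidence definitions follow a general shape along these lines; this is a reading of those examples, not the full grammar of the language:

    Evidence <evidence_name>:
       get students show <indicator> in <module>.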

4 Case Study

This case study follows an authentic learning approach based on the simulation of the back-end department of an IT company, in which employees are required to solve information requirements. The employees were actually 112 students of a computer science degree at the University of Cadiz (Spain) who were enrolled in a databases course during the 2017/18 academic year.

4.1 Method

This experience was conducted according to the organizational learning process previously described.

  1. Identification of training needs and required skills: In addition to learning how to use the SQL language, the students are expected to acquire generic skills related to problem solving and planning and time management.
  2. Design of learning activities: The learning activities consist of both theoretical and laboratory sessions. Six handouts had to be submitted at the end of their respective weekly laboratory sessions (2 hours each). Each submission comprised three SQL queries. After the submissions, the supervisor provided the learners with the solutions and they then had to assess the queries of two of their peers before a second deadline. The students had to perform their assigned tasks both correctly and on time.
  3. Development of assessment instruments: A rubric to assess the learners’ SQL queries was defined, which includes a number of items to check their completeness, correctness and appropriateness (a sketch of this rubric as data is given after this list):
    • Is the query implemented according to what is requested in the statement and was it solved using suitable operators? (Answers: undelivered, 3 or more errors, 2 errors, 1 error, no errors)
    • Does the query return the expected result? (Answers: yes/no)
    • Is there an error in the query that does not allow the database management system to process it? (Answers: yes/no)
    • Has the task been delivered according to the established rules (naming, format, etc.)? (Answers: yes/no)
  4. Mapping activities to assessment instruments and skills/learning outcomes: The learning activities were focused on developing the students’ generic skills, as follows:
    • Problem solving: identifying the information requirements of each assignment and designing the appropriate queries.
    • Planning and time management: delivering their tasks (query writing and peer-assessments) on time.
  5. Engagement in formative activities: During the laboratory sessions, the learners studied different SQL lessons. As they finished a lesson, they were assigned a related task. These tasks grew in complexity as the lessons progressed.
  6. Performing manual assessment activities: There was no manual assessment by the supervisor. As we discussed in our previous case study [8], manual assessment does not scale to a high number of learners.
  7. Performing computer-assisted assessment activities: EvalCourse was used for the computer-assisted assessment. By writing SASQL queries in EvalCourse, the supervisor could easily retrieve objective indicators about the user’s interactions with the KMS.
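
The rubric from step 3 could be represented as data along the following lines; this is a minimal sketch of its structure only, since the paper does not specify scoring weights and this is not EvalCOMIX's internal representation:

    # Rubric items and their answer options, as listed in step 3
    rubric = [
        {"item": "Is the query implemented according to the statement and "
                 "solved using suitable operators?",
         "options": ["undelivered", "3 or more errors", "2 errors",
                     "1 error", "no errors"]},
        {"item": "Does the query return the expected result?",
         "options": ["yes", "no"]},
        {"item": "Is there an error that prevents the DBMS from processing "
                 "the query?",
         "options": ["yes", "no"]},
        {"item": "Has the task been delivered according to the established "
                 "rules (naming, format, etc.)?",
         "options": ["yes", "no"]},
    ]

    for entry in rubric:
        print(entry["item"], "->", " / ".join(entry["options"]))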

Table 3 Generic skills scale

Grade | Performance
[0–50) | Poor
[50–75) | Acceptable
[75–100) | Good
100 | Excellent

4.2 Assessment Results

By the end of the course, learners should have completed 12 tasks (six SQL submissions and six peer-assessments). To assess their performance in the two generic skills, their grades were considered according to the scale shown in Table 3.

The first aspect to analyze consists of checking whether the learners delivered their tasks on time. For this, the SASQL query is as follows:

Evidence queries_on_time:
   get students show submission in workshop.

This query returns reports showing the number of tasks that each learner delivered on time. The results are shown in Figure 4 (left-hand side). They show that only nine learners (8%) had a poor performance (i.e., fewer than half of their tasks were delivered on time), whereas 35 learners had an excellent performance.
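
As an illustration of how such a report can be translated into the scale of Table 3, the following sketch converts the number of on-time deliveries into a grade out of 100 and maps it to a performance level; it mirrors the scale in Table 3 rather than EvalCourse's internal logic:

    def performance_level(on_time: int, total: int = 12) -> str:
        """Map the number of on-time deliveries to the scale of Table 3."""
        grade = 100 * on_time / total
        if grade == 100:
            return "Excellent"
        if grade >= 75:
            return "Good"
        if grade >= 50:
            return "Acceptable"
        return "Poor"

    print(performance_level(5))   # 5 of 12 tasks on time -> 'Poor'
    print(performance_level(12))  # all 12 tasks on time -> 'Excellent'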

The second aspect to analyze is whether learners completed their tasks successfully or not. For this, the SASQL query is as follows:

Evidence queries_evaluation:
   get students show evaluations in workshop.


Figure 4 Results of generic skills assessment.

The second query returns reports showing the average of the peer-assessments received by each learner. The learners used the aforementioned rubric to assess their peers. The results are shown in Figure 4 (right-hand side). They show that 12 learners had a poor or acceptable performance and only four learners had an excellent performance. The majority of the learners had a good performance (between 75 and 99 out of 100).

4.3 Threats to Validity

It is necessary to consider threats to the validity of our research. To assure the construct validity of the literature review, we followed guidelines that are widely recognized in software engineering research. Its internal and external validity are also supported because we analyzed the extracted data using simple techniques without making any projection. However, since the case study was carried out with undergraduate students, we cannot generalize the findings to other types of organizations beyond universities. Further replication of the study in other contexts is therefore required.

5 Conclusions

Virtual environments such as KMSs are used in many organizations because of their communication and information management features. KMSs also provide a suitable environment to support learning in the workplace. Learning in the workplace implies planning activities to support employees’ improvement in both specific and generic skills. Knowing how employees perform in generic skills can be strategically relevant for detecting skills that need to be improved or for placing employees in the most suitable position. However, measuring employees’ performance in generic skills is not easy, especially in virtual environments.

This research begins with a systematic mapping study, which shows how users’ performance in generic skills is currently measured in this type of environment. Although some works were found that semi-automate the assessment process, these approaches still involve manual tasks and suffer from both scalability and subjectivity issues.

This research proposes an organizational learning process model to support training activities within organizations, consisting of seven steps that support the acquisition and assessment of skills. Assessment within the proposed model is supported by EvalCourse, an MDE-based software tool that automatically extracts indicators from KMS records. Planning, time management and problem solving skills were assessed in a case study conducted with 112 learners through their interaction with the KMS.

The results are promising: an assessment based on objective indicators has been achieved which, without this type of tool, would not be scalable.

Acknowledgments

This work was carried out as part of the VISAIGLE project, funded by the Spanish National Research Agency (AEI) with ERDF funds under grant ref. TIN2017-85797-R.

References

[1] F. Achcaoucaou, L. Guitart-Tarrés, P. Miravitlles-Matamoros, A. Núñez-Carballosa, M. Bernardo, and A. Bikfalvi. Competence assessment in higher education: a dynamic approach. Human Factors and Ergonomics in Manufacturing & Service Industries, 24(4):454–467, 2014.

[2] M. Luz Alvarez, I. Sarachaga, A. Burgos, E. Estévez, and M. Marcos. A methodological approach to model-driven design and development of automation systems. IEEE Transactions on Automation Science and Engineering, 15(1):67–79, 2018.

[3] M. André, M. G. Baldoquín, and S. T. Acuña. Formal model for assigning human resources to teams in software projects. Information and Software Technology, 53(3):259–275, 2011.

[4] L. Argote. Organization learning: a theoretical framework. In Organizational learning, pages 31–56. Springer, 2013.

[5] E. Arno-Macia and C. Rueda-Ramos. Promoting reflection on science, technology, and society among engineering students through an eap online learning environment. Journal of English for Academic Purposes, 10(1):19–31, 2011.

[6] A. A. Aziz, A. Mohamed, N. Arshad, S. Zakaria, and M. S. Masodi. Appraisal of course learning outcomes using rasch measurement: a case study in information technology education. International Journal of Systems Applications, Engineering & Development, 4(1):164–172, 2007.

[7] L. Bailyn. Toward the perfect work place? the experience of home-based systems developers. In Information Technology and the Corporation of The 1990s : Research Studies, pages 410–439. Oxford University Press, 1994.

[8] A. Balderas, J. A. Caballero-Hernández, J. M. Dodero, M. Palomo-Duarte, and I. Ruiz-Rube. Assessment of generic skills through an organizational learning process model. In Proceedings of the 14th International Conference on Web Information Systems and Technologies – Volume 1: WEBIST, pages 293–300. INSTICC, SciTePress, 2018.

[9] A. Balderas, J. M. Dodero, M. Palomo-Duarte, and I. Ruiz-Rube. A domain specific language for online learning competence assessments. International Journal of Engineering Education, 31(3):851–862, 2015.

[10] M. Bedek, S. A. Petersen, and T. Heikura. From behavioral indicators to contextualized competence assessment. In 11th IEEE International Conference on Advanced Learning Technologies (ICALT), 2011, pages 277–281. IEEE, 2011.

[11] J. V. Benlloch-Dualde and S. Blanc-Clavero. Adapting teaching and assessment strategies to enhance competence-based learning in the framework of the european convergence process. In 37th Annual Frontiers In Education Conference-Global Engineering: Knowledge Without Borders, Opportunities Without Passports, 2007. FIE’07., pages S3B–1. IEEE, 2007.

[12] P. C. Candy and R. G. Crebert. Ivory tower to concrete jungle: the difficult transition from the academy to the workplace as learning environments. The Journal of Higher Education, 62(5):570–592, 1991.

[13] A. Carreras-Marín, Y. Blasco, M. Badia-Miró, M. Bosch-Príncep, I. Morillo, G. Cairó-i Céspedes, and D. Casares-Vidal. The promotion and assessment of generic skills from interdisciplinary teaching teams. EDULEARN13 Proceedings, pages 201–207, 2013.

[14] Y. Chang, T. Eklund, J. I Kantola, and H. Vanharanta. International creative tension study of university students in south korea and finland. Human Factors and Ergonomics in Manufacturing & Service Industries, 19(6):528–543, 2009.

[15] A. R. Da Silva. Model-driven engineering: a survey supported by the unified conceptual model. Computer Languages, Systems & Structures, 43:139–155, 2015.

[16] D. Djaouti, J. Alvarez, and J. P. Jessel. Classifying serious games: the g/p/s model. Handbook of research on improving learning and motivation through educational games: Multidisciplinary approaches, pages 118–136, 2011.

[17] J. G. Enríquez, J. M. Sánchez-Begines, F. J. Domínguez-Mayo, J. A. García-García, and M. J. Escalona. An approach to characterize and evaluate the quality of product lifecycle management software systems. Computer Standards & Interfaces, 61:77 – 88, 2019.

[18] P. Ficapal-Cusí and J. Boada-Grau. e-learning and team-based learning: practical experience in virtual teams. Procedia-Social and Behavioral Sciences, 196:69–74, 2015.

[19] A. Fidalgo-Blanco, D. Lerís, M. L. Sein-Echaluce, and F. J. García-Peñalvo. Monitoring indicators for ctmtc: comprehensive training model of the teamwork competence in engineering domain. International Journal of Engineering Education, 31(3):829–838, 2015.

[20] A. Fidalgo-Blanco, M. L. Sein-Echaluce, F. J. García-Peñalvo, and M. A. Conde. Using learning analytics to improve teamwork assessment. Computers in Human Behavior, 47(C):149–156, June 2015.

[21] Gescompeval. https://assembla.com/spaces/inteweb-gescompeval, 2014.

[22] M. Guenaga, S. Arranz, I. Rubio-Florido, E. Aguilar, A. Ortiz de Guinea, A. Rayon, M. J. Bezanilla, and I. Menchaca. Serious games for the development of employment oriented competences. IEEE Revista Iberoamericana de Tecnologías del Aprendizaje, 8(4):176–183, 2013.

[23] P. Huang and Y. Guo. Research on the relationships among knowledge transfer, organizational learning and dynamic competence. Ninth Wuhan International Conference On E-Business, I-III:1449–1456, 2010.

[24] Tuning Educational Structures in Europe Project. Generic competences. http://unideusto.org/tuningeu/competences/generic.html, 2003. Accessed: 2019-03-05.

[25] B. Kitchenham, R. Pretorius, D. Budgen, O. P. Brereton, M. Turner, M. Niazi, and S. Linkman. Systematic literature reviews in software engineering–a tertiary study. Information and Software Technology, 52(8):792–805, 2010.

[26] R. Lacuesta, G. Palacios, and L. Fernández. Active learning through problem based learning methodology in engineering edu-cation. In 39th IEEE Frontiers in Education Conference, 2009. FIE’09., pages 1–6. IEEE, 2009.

[27] A. Lasa, I. Txurruka, E. Simón, and J. Miranda. Problem based learning implementation in the degree of human nutrition and dietetics. Proceedings of the International Conference of Education, Research and Innovation (ICERI), pages 1687–1692, 2013.

[28] J. Liebowitz and M. Frank. Knowledge Management and e-Learning. CRC press, 2016.

[29] M. R. Martín-Briceño and S. Prashar. Acquired skills with the implementation of new evaluation methods at University Rey Juan Carlos. Proceedings of the International Conference of Education, Research and Innovation (ICERI), pages 4875–4878, 2013.

[30] A. Masip-Álvarez, C. Hervada-Sala, T. Pàmies-Gómez, A. Arias-Pujol, C. Jaen-Fernandez, C. Rodriguez-Sorigue, D. Romero-Duran, F. Nejjari-Akhi-Elarab, M. Alvarez-del Castillo, M. Roca-Lefler, et al. Self-video recording for the integration and assessment of generic competencies. In IEEE Global Engineering Education Conference (EDUCON), pages 436–441. IEEE, 2013.

[31] A. M. Nita, I. G. Solomon, and L. Mihoreanu. Building competencies and skills for leadership through the education system. In The International Scientific Conference eLearning and Software for Education, volume 2, page 410. “Carol I” National Defence University, 2016.

[32] B. Oliver. Graduate attributes as a focus for institution-wide curriculum renewal: innovations and challenges. Higher Education Research & Development, 32(3):450–463, 2013.

[33] J. E. Perez-Martinez, J. García-Martín, and A. Sierra-Alonso. Teamwork competence and academic motivation in computer science engineering studies. In IEEE Global Engineering Education Conference (EDUCON), pages 778–783. IEEE, 2014.

[34] K. Petersen, R. Feldt, S. Mujtaba, and M. Mattsson. Systematic mapping studies in software engineering. In EASE, volume 8, pages 68–77, 2008.

[35] N. Piedra, J. Chicaiza, J. López, A. Romero, and E. Tovar. Measuring collaboration and creativity skills through rubrics: experience from UTPL collaborative social networks course. In IEEE Education Engineering (EDUCON), pages 1511–1516. IEEE, 2010.

[36] R. A. Rashid, R. Abdullah, A. Zaharim, H. Ahmad Ghulman, M. S. Masodi, J. L. Mauri, A. Zaharim, A. Kolyshkin, M. Hatziprokopiou, A. Lazakidou, et al. Engineering students performance evaluation of generic skills measurement: ESPEGS model. In Proceedings of the 5th WSEAS International Conference on Engineering Education (EE’08), volume 5, pages 377–383. World Scientific and Engineering Academy and Society (WSEAS), 2008.

[37] A. Rayon-Jerez, M. Guenaga, and A. Núñez. A web platform for the assessment of competences in mobile learning contexts. In IEEE Global Engineering Education Conference (EDUCON), pages 321–329. IEEE, 2014.

[38] M. L. Renau-Renau and J. Usó-Viciedo. Teaching and learning through projects using the ict: practice of the english writing through business documents. Proceedings of the International Conference of Education, Research and Innovation (ICERI) 2010, pages 4700–4705, 2010.

[39] S. Rodriguez-Donaire, B. Amante-García, and S. Oliver-Del-Olmo. e-portfolio: a tool to assess university students’ skills. In 9th International Conference on Information Technology Based Higher Education and Training, pages 114–124. IEEE, 2010.

[40] C. Ruizacárate-Varela, M. J. García-García, C. González-García, and J. L. Casado-Sánchez. Soft skills: a comparative analysis between online and classroom teaching. In Proceedings of the International Conference on Advanced Education Technology and Management Science (AETMS2013), pages 359–366. DEStech Publications, 2013.

[41] M. S. Sáiz-Ibarra, D. Cabeza-Sánchez, A. R. León-Rodríguez, G. Rodríguez-Gómez, M. A. Gómez-Ruiz, B. Gallego-Noche, V. Quesada-Serra, J. Cubero-Ibáñez, et al. Evalcomix en moodle: Un medio para favorecer la participación de los estudiantes en la e-evaluación. RED, Revista de Educación a Distancia. Special number-SPDECE, 2010.

[42] J. Serrano-Guerrero, F. P. Romero, and J. A. Olivas. Hiperion: a fuzzy approach for recommending educational activities based on the acquisition of competences. Information Sciences, 248: 114–129, 2013.

[43] A. Sevilla-Pavón, A. Martínez-Sáez, and A. Gimeno-Sanz. Assessment of competences in designing online preparatory materials for the cambridge first certificate in english examination. Procedia-Social and Behavioral Sciences, 34:207–211, 2012.

[44] A. I. Starcic. Sustaining teacher’s professional development and training through web-based communities of practice. In International Symposium on Applications and the Internet, 2008. SAINT 2008., pages 317–320. IEEE, 2008.

[45] J. Strijbos, N. Engels, and K. Struyven. Criteria and standards of generic competences at bachelor degree level: a review study. Educational Research Review, 14:18–32, 2015.

[46] P. Tynjälä. Perspectives into learning at the workplace. Educational research review, 3(2):130–154, 2008.

[47] C. Vizcarro-Guarch, P. Martin-Espinosa, R. Cobos, J. E. Pérez, E. Tovar-Caro, G. Blanco-Viejo, A. Bermudez-Marin, and J. R. Ruiz-Gallardo. Assessment of problem solving in computing studies. In IEEE Frontiers in Education Conference, 2013, pages 999–1003. IEEE, 2013.

[48] T. Ward and S. Christophe. Developing entrepreneurial accounting and finance competency using the elleiec virtual centre for enterprise. In Proceedings of the 22nd European Association For Education In Electrical And Information Engineering Annual Conference (EAEEIE), pages 1–5. IEEE, 2011.

[49] R. Wieringa, N. Maiden, N. Mead, and C. Rolland. Requirements engineering paper classification and evaluation criteria: a proposal and a discussion. Requirements Engineering, 11(1):102–107, 2006.

[50] F. Yang, F. W. B. Li, and R. W. H. Lau. A fine-grained outcome-based learning path model. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 44(2):235–245, 2014.

Biographies


Antonio Balderas received his MSc in Computer Science and his PhD degree from the University of Cadiz. He works at the University of Cádiz as a Lecturer in the Department of Computer Engineering and as a researcher in the Software Process Improvement and Formal Methods (SPI&FM) group. His research is focused on the field of Technology-Enhanced Learning. His doctoral thesis was awarded the prize for the best doctoral thesis in the field of educational technologies in 2016 by both the eMadrid project and the Spanish Chapter of the IEEE Education Society. Previously, he worked as a project manager in different Spanish IT companies.


Juan Antonio Caballero-Hernández holds a degree in Computer Science and is a PhD candidate at the University of Cadiz. His main research interests are learning experiences based on serious games and diverse applications of Process Mining. Outside the academic environment, he has worked in different positions in the IT sector, including web development and the management of teams and international projects.


Juan Manuel Dodero is a Professor of Computer Systems & Languages at the University of Cádiz. He previously worked as a lecturer at the University Carlos III of Madrid, as well as an R&D consultant for Spanish ICT companies. He holds a Computer Science and Engineering PhD from the University Carlos III of Madrid. His main research interests are creative computing and technology-enhanced learning, with a focus on software technologies for computer-aided creation and assessment. He has participated in and coordinated various R&D projects related to these subjects.


Manuel Palomo-Duarte received his MSc degree in computer science from the University of Seville and his PhD degree from the University of Cádiz. He works in the Computer Science Department of the University of Cadiz as an Associate Professor. He is the author of three book chapters, 20 papers published in indexed journals and more than 30 contributions to international academic conferences. His main research interests are learning technologies, serious games and the collaborative web. He was a board member of Wikimedia Spain from 2012 to 2016.


Iván Ruiz-Rube is an assistant professor at the University of Cadiz, Spain. He received his Master’s degree in Software Engineering and Technology from the University of Seville and his PhD from the University of Cadiz. His fields of research are technology-enhanced learning and software process improvement, in which he has published several papers. Previously, he worked as a software engineer for consulting companies such as Everis Spain S.L. and Sadiel S.A.
