SAC 2015 is offering 8 half-day tutorials on Monday April 13, 2015. Tutorials are open to those who registered for them. Handouts will be available online right before the conference. No printed handouts are provided during the tutorials, so please bring your own copies (printed or electronic). Lunch tickets will be issued to registered attendees. For questions or inquiries about the tutorials, please contact the Tutorials Chair:

Dr. Sara Rodriguez
University of Salamanca
Salamanca, Spain
srg@usal.es

Schedule:

Morning Session: 10:00am - 1:30pm (AM Coffee Break: 11:30am - 12:00 noon)

Room A - T#1: Interaction Design for Specifying Requirements
Professor Hermann Kaindl, Vienna University of Technology, ICT, Vienna, Austria

Room B - T#2: Security of Web Applications and Browsers: Challenges and Solutions
Dr. Hossain Shahriar, Kennesaw State University, Kennesaw, Georgia, USA

Room C - T#3: Middleware and Healthcare Apps for Internet-of-Things (IoT)
Dr. José Cecílio, Ms. Karen Duarte, and Dr. Pedro Furtado, University of Coimbra, Coimbra, Portugal

1:30pm - Social Luncheon for attendees who registered for the Tutorials. The luncheon will be held at the conference venue, and lunch tickets will be issued.

Afternoon Session: 3:30pm - 7:00pm (PM Coffee Break: 5:00pm - 5:30pm)

Room A - T#4: Synchronization is coming back but is it the same?
Dr. Michel Raynal, Université de Rennes 1, Rennes Cedex, France

Room B - T#5: Software Reuse and Reusability Involving Requirements, Product Lines, and Semantic Service Specifications
Professor Hermann Kaindl, Vienna University of Technology, ICT, Vienna, Austria

Room C - T#6: Data Intensive Computing: Algorithms and Tools (Canceled)
Dr. Laura Ricci, University of Pisa, Pisa, Italy

Room D - T#7: Prevalence Estimation in Information Retrieval, Machine Learning, and Data Mining
Dr. Fabrizio Sebastiani, Qatar Computing Research Institute, Doha, Qatar


Tutorial Details:

Monday April 13, 2015, 10:00am - 1:30pm (Coffee Break: 11:30am - 12:00 noon)

T#1: Interaction Design for Specifying Requirements

Handout: T1-Handout
(copyrighted materials. The copyright belongs to the tutorial presenter unless otherwise stated)

Abstract:
When the requirements and the interaction design of a system are separated, they will most likely not fit together, and the resulting system will be less than optimal. Even if all the real needs are covered in the requirements and implemented, a bad interaction design and its resulting user interface may induce errors in human-computer interaction. Such a system may not even be used at all. Conversely, a great user interface for a system with features that are not required will not be very useful either.

This tutorial explains joint modeling of (communicative) interaction design and requirements, through discourse models and ontologies. Our discourse models are derived from results of human communication theories, cognitive science and sociology (even without employing speech or natural language). While these models were originally devised for capturing interaction design, it turned out that they can also be viewed as specifying classes of scenarios, i.e., use cases. In this sense, they can also be utilized for specifying requirements. Ontologies are used to define domain models and the domains of discourse for the interactions with software systems. User interfaces for these software systems can be generated semi-automatically from our discourse models, domain-of-discourse models and specifications of the requirements. This is especially useful when user interfaces for different devices are needed. In this way, interaction design facilitates requirements engineering, making applications both more useful and more usable.

Bio:
Hermann Kaindl joined the Institute of Computer Technology at the Vienna University of Technology in early 2003 as a full professor, where he also serves in the Senate. Prior to moving to academia, he was a senior consultant with the division of program and systems engineering at Siemens AG Austria, where he gained more than 24 years of industrial experience in software development and human-computer interaction. He has published five books and more than 170 papers in refereed journals, books and conference proceedings. He is a Senior Member of the IEEE, a Distinguished Scientist member of the ACM, a member of the AAAI, and is on the executive board of the Austrian Society for Artificial Intelligence.

He has previously held tutorials at CAiSE’00, RE’01, RE’02, HICSS-36, INCOSE’03, RE’03, CADUI-IUI’04, INCOSE’04, RE’04, HICSS-38, IRMA’05, INCOSE’05, AAAI’06, HCI’06, OOPSLA’06, HICSS-40, ICONS’07, INCOSE’07, AAAI’07, IFIP Interact’07, OOPSLA’07, HICSS-41, ICCGI’08, RE’08, ICSEA’08, ICIW’09, IFIP Interact’09, SMC’09, HICSS-43, ACHI’10, ACM EICS’10, ICSEA’10, TdSE’10, HICSS-44, ACM SAC’11, INCOSE’11, AAAI’11, RE’11, ICSEA’11, HICSS-45, ACM SAC’12, ACM CHI’12, PROFES’12, BCS HCI’12, IEEE APSEC'12, HICSS-46, ACM SAC’13, NexComm’13, PROFES’13, ICSOFT’13, IEEE Africon’13, IEEE APSEC’13, HICSS-47, ACM SAC’14 and WEB’14.

Several of these tutorials were related to the one proposed here, most strongly the one at HICSS-47. Note that this tutorial covers more recent and more advanced material than the related one he gave at SAC’12. In addition, Hermann Kaindl organized and chaired several panels at major conferences, such as the one at CHI 2001 entitled “Methods and Modeling: Fiction or Useful Reality?” and the one at RE’08 entitled “How to Combine Requirements Engineering and Interaction Design?”.

Dr. Hermann Kaindl, Professor
Vienna University of Technology, ICT
Gusshausstr. 27-29
A-1040 Vienna, Austria
Email: kaindl @ ict.tuwien.ac.at
Web: http://www.ict.tuwien.ac.at/kaindl


T#2: Security of Web Applications and Browsers: Challenges and Solutions

Handout: T2-Handout
(copyrighted materials. The copyright belongs to the tutorial presenter unless otherwise stated)

Abstract:
We rely on web applications to perform many useful activities. Despite the awareness raised over the past decade about vulnerabilities commonly discovered in the implementation of web applications, we still observe the presence of known vulnerabilities today. Worse, browser platforms pose extra challenges through extended functionalities and lightweight extension applications that can not only access data from visited web pages and local machines, but also transfer them to remote hosts controlled by attackers. A solid understanding of both application and browser platform security is therefore essential to tame the insecure web.

In this tutorial we will provide an overview of some common vulnerabilities in web applications and browsers, followed by some common techniques for combating security threats. In particular, we will discuss some well-known implementation-level vulnerabilities (e.g., SQL Injection, Cross-Site Scripting, Clickjacking) along with a popular mitigation approach known as security testing. We then focus our discussion on the browser platform and explore some of its supported features for extension applications and their capabilities. We will highlight vulnerabilities arising from extensions, followed by an exploration of malware extensions. Finally, we discuss some practices for implementing browser extensions securely and for combating malware extensions.
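As a minimal illustration of the kind of implementation-level vulnerability discussed above (this sketch is not taken from the tutorial materials; the table, data and function names are hypothetical, and Python's standard sqlite3 module stands in for a real database layer), compare a string-concatenated query with a parameterized one:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    def login_vulnerable(name, password):
        # UNSAFE: attacker-controlled input is concatenated into the SQL text,
        # so an input like "' OR '1'='1" changes the logic of the query.
        query = ("SELECT * FROM users WHERE name = '" + name +
                 "' AND password = '" + password + "'")
        return conn.execute(query).fetchone() is not None

    def login_safe(name, password):
        # Parameterized query: the driver treats both inputs purely as data.
        query = "SELECT * FROM users WHERE name = ? AND password = ?"
        return conn.execute(query, (name, password)).fetchone() is not None

    print(login_vulnerable("alice", "' OR '1'='1"))  # True  -- authentication bypassed
    print(login_safe("alice", "' OR '1'='1"))        # False -- injection attempt fails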

Bio:
Dr. Hossain Shahriar is currently an Assistant Professor of Computer Science at Kennesaw State University, Georgia, USA. His research interests include software security, web application security, software testing, mobile application security, and malware analysis. Dr. Shahriar is an expert on application security testing with extensive publications and industry experience. His research has attracted a number of awards including the IEEE DASC 2011 Best Paper Award, the Outstanding PhD Research Achievement Award 2011, and the IEEE Kingston Section Research Excellence Award 2008. Dr. Shahriar presented tutorials at ACM SAC 2011 and IEEE ISSRE 2012, and was invited to present a tutorial on web application security issues at ACM/SIGSAC SIN 2013. He has served as a PC member for various international conferences related to computer and software security, such as ACM SAC 2014 (Computer Security Track), ACM/SIGSAC SIN 2014, and IEEE ITNG 2014. He is also serving as an associate editor of the International Journal of Secure Software Engineering. Dr. Shahriar is currently a member of the ACM, ACM SIGAPP, and IEEE.

Dr. Hossain Shahriar
Department of Computer Science
Kennesaw State University
Kennesaw, GA 30144, USA
Email: hshahria@kennesaw.edu
Website: http://cs.kennesaw.edu/hshahria


T#3: Middleware and Healthcare Apps for Internet-of-Things (IoT)

Handout: T3-Handout
(copyrighted materials. The copyright belongs to the tutorial presenter unless otherwise stated)

Abstract:
We live in an exciting time for lovers of Lego-like sensing devices and remote operation. The availability of off-the-shelf sensors and wireless sensor nodes has increased dramatically over recent years, while their price has decreased just as dramatically. Today, anyone with reasonable programming expertise and a love for sensor technology can buy, for example, a Raspberry Pi and some sensors, learn online how to assemble and configure them, and put them to work in some simple application. Wireless Sensor Networks (WSN) are networks of such nodes, and WSN middleware allows programming and operating such networks.
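To give a flavour of how simple such a node can be (this sketch is purely illustrative and not part of the middleware presented in the tutorial; the endpoint, node id and sensor reading are hypothetical, and only the Python standard library is used), a node might periodically read a sensor and push the reading to a middleware ingestion service:

    import json
    import random
    import time
    import urllib.request

    # Hypothetical middleware ingestion endpoint; a real deployment would use
    # whatever API the WSN/IoT middleware exposes (HTTP, MQTT, CoAP, ...).
    MIDDLEWARE_URL = "http://middleware.example.org/ingest"
    NODE_ID = "node-42"

    def read_temperature():
        # Stand-in for a real sensor driver (e.g. an I2C or 1-wire temperature chip).
        return round(20.0 + random.uniform(-0.5, 0.5), 2)

    def publish(value):
        payload = json.dumps({"node": NODE_ID, "sensor": "temperature",
                              "value": value, "ts": time.time()}).encode("utf-8")
        request = urllib.request.Request(MIDDLEWARE_URL, data=payload,
                                         headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request, timeout=5) as response:
            return response.status

    if __name__ == "__main__":
        while True:
            publish(read_temperature())
            time.sleep(60)  # report once per minute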

The vision of the Internet-of-Things (IoT) is a related but more global concept. It is one of the recent technological and social trends that will have a significant impact on the delivery of healthcare. The internet extends into the real world: everything becomes interconnected and has a digital entity [3], [4], [5]. Everyday objects will have the capability of directly interacting with each other and with humans [4], [6].

In this tutorial we first review middleware for WSN and architectures of IoT. Then we present the design of an interoperable and heterogeneity-handling middleware for both WSN and for IoT.

Once we have presented the middleware solutions for WSN and IoT, we will describe how these concepts are applied in two applications related to healthcare. Healthcare is nowadays an important topic on all government and political agendas [1]. This can be explained by the ageing of the population, the higher number of people living with long-term conditions, and the growing demand for more advanced healthcare and new medical technologies [2]. In healthcare systems, IoT enables the patient to stay longer and more safely at home, since smart devices can alert the hospital in case of critical conditions. Furthermore, due to constant monitoring, the patient can be relieved of the hassle of routine checks, replacing costly travel and reducing patient stress [8]. Using implantable wireless devices to store health records could save a patient's life in emergency situations [4], [8].

Several other studies have demonstrated that IoT is an enabler with the potential to greatly affect and improve the quality of healthcare [9], [10], [11], [12], [13], [14]. The work in [12] proposes an obstacle detection system based on ultrasonic sensors that can be added to a cane to help blind people find their way in an unfamiliar area. A shoe-mounted, multiple-sensor interface is also studied in [3] as a supplementary device to the cane, while the authors of [14] propose a system based on RFID and GPS that helps visually impaired people in their navigation and motion activities. Concerning paralyzed patients in a hospital, IoT can be used to alert nurses and caregivers, for example, to replace diapers as soon as they become wet [15].

We will pick two specific projects to which the concepts of IoT middleware are applied: one related to indoor navigation by blind people to find products and services, the other related to applying Brain-Computer Interfaces to help people with disabilities. We will describe the architecture and middleware developed, and how they work.

Bios:
José Cecílio is a senior researcher at the Centre for Informatics and Systems of the University of Coimbra. He received his graduate degree and M.Sc. in Electrical and Computer Engineering from the University of Coimbra in 2006 and 2008, respectively, and also holds a Ph.D. in computer science from the University of Coimbra (2013). His main research interests are in the areas of the Internet of things, embedded systems, distributed systems and communication systems, with a focus on embedded devices for healthcare, wireless networks and vehicular networks. His bibliography includes two full books, and he has several research collaborations with both industry and academia. He has actively participated in several industrial projects related to automation and robotics, and he is a licensed Professional Engineer.

Karen Duarte is a researcher at the Centre for Informatics and Systems of the University of Coimbra. She received her MSc in Biomedical Engineering from the University of Coimbra in 2014. Her main research interests are in assistive technologies and systems oriented to helping blind people.

Pedro Furtado is a Professor at the University of Coimbra (UC), Portugal, where he teaches courses in both Computer and Biomedical Engineering. Pedro has more than 25 years of experience in teaching, research and supervising industry projects. As part of his work, he has supervised more than 50 Software Engineering projects in different industries, with some emphasis on telecommunications-related projects. His main research interests are the performance and scalability qualities of systems, which he has applied to data warehousing, big data, analytics, data mining, cloud, IoT and real-time systems. He has more than 150 papers published in international conferences and journals, has published books, and has several research collaborations with both industry and academia. In recent years, Pedro has spent time as a visiting scholar at some of the most prestigious universities in the world. Besides a PhD in Computer Engineering from the University of Coimbra (2000), Pedro Furtado holds an MBA from Universidade Catolica Portuguesa (UCP) (2004).

Dr. Jose Cecilio
University of Coimbra, Coimbra, Portugal

jcecilio@dei.uc.pt
https://eden.dei.uc.pt/~jcecilio

Ms. Karen Duarte
University of Coimbra, Coimbra, Portugal
uc2009114194@student.fis.uc.pt

Dr. Pedro Furtado
University of Coimbra, Coimbra, Portugal
pnf@dei.uc.pt
https://eden.dei.uc.pt/~pnf


Monday April 13, 2015, 3:30pm - 7:00pm (Coffee Break: 5:00pm - 5:30pm)

T#4: Synchronization is coming back but is it the same?

Handout: T4-Handout
(copyrighted materials. The copyright belongs to the tutorial presenter unless otherwise stated)

Abstract:
Informally, "wait-free" means that the progress of a process depends only on itself. This notion is more and more pervasive in a lot of problems that basically rely (in one way or another) on the definition and use of concurrent objects in the presence of failures. This tutorial will visit wait-free computing: its underlying concepts and its basic mechanisms. To that end, the lecture will also visit fundamental problems of asynchronous computing in the presence of failures, such as renaming, set agreement, collect, snapshot, etc. It will also present fundamental notions related to the implementation of concurrent objects, such as t-resilience and graceful degradation.

The literature on this topic is mostly technical and appears mainly in theoretically inclined journals and conferences. The aim of this tutorial is to provide an introductory survey of the new synchronization concepts that have been introduced in the recent past. The tutorial is intended for people who are not familiar with these concepts and want to quickly understand their aim, their basic principles, and their power and limitations. The tutorial will adopt an algorithmic approach to explain these new concepts. From a practical point of view, the advent of multicore architectures makes this topic central for researchers and engineers whose main interests lie in distributed fault-tolerance and dependability for shared memory systems. Moreover, whatever the problem they have to solve, one aim of the tutorial is to enlarge the knowledge and the background of researchers and engineers whose main interest is dependability.
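As a toy illustration of the wait-free idea (this sketch is not taken from the tutorial; the class and its operations are hypothetical), consider a shared counter in which each process increments only its own slot, so that every operation completes in a bounded number of its own steps no matter what the other processes do:

    import threading

    class WaitFreeCounter:
        # Toy shared counter: process i writes only slot i; a read sums all slots.
        # Increments and reads never block on other processes (no locks involved),
        # which is the essence of wait-freedom.

        def __init__(self, num_processes):
            self._slots = [0] * num_processes

        def increment(self, pid):
            # Single-writer slot: only process `pid` ever writes here.
            self._slots[pid] += 1

        def read(self):
            # A reader may see a slightly stale view, but it never waits.
            return sum(self._slots)

    counter = WaitFreeCounter(num_processes=4)

    def worker(pid, n):
        for _ in range(n):
            counter.increment(pid)

    threads = [threading.Thread(target=worker, args=(i, 10000)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter.read())  # 40000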

Bio:
Michel Raynal has been a professor of computer science since 1981. At IRISA (CNRS-INRIA-University joint computing research laboratory located in Rennes), he founded a research group on Distributed Algorithms in 1983. His research interests include distributed algorithms, distributed computing systems, networks and dependability. His main interest lies in the fundamental principles that underlie the design and construction of distributed computing systems. He has been Principal Investigator of a number of research grants in these areas, and has been invited by many universities all over the world to give lectures on distributed algorithms and distributed computing. He has supervised more than 45 PhD students, and his h-index exceeds 50.

Professor Michel Raynal has published more than 130 papers in journals (Journal of the ACM, Algorithmica, Acta Informatica, SIAM Journal on Computing, Distributed Computing, Communications of the ACM, Information and Computation, Journal of Computer and System Sciences, JPDC, IEEE Transactions on Computers, IEEE Transactions on Software Engineering, IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Parallel and Distributed Systems, IEEE Computer, IEEE Software, IPL, PPL, Theoretical Computer Science, Real-Time Systems Journal, The Computer Journal, etc.). He has also published more than 280 papers in conferences (ACM STOC, ACM PODC, ACM SPAA, IEEE ICDCS, IEEE DSN, DISC, COCOON, IEEE IPDPS, Euro-Par, FST&TCS, IEEE SRDS, etc.).

Michel Raynal has written 10 books devoted to parallelism, distributed algorithms and systems (MIT Press and Wiley). In the recent past, he has written two books devoted to fault-tolerant distributed systems, both published by Morgan & Claypool: "Communication and Agreement Abstractions for Fault-Tolerant Asynchronous Distributed Systems" (June 2010) and "Fault-Tolerant Agreement in Synchronous Message-Passing Systems" (September 2010). His book on fault-tolerant synchronization in shared memory systems, "Concurrent Programming: Algorithms, Principles, and Foundations" (515 pages), was published in early 2013. His most recent book, "Distributed Algorithms for Message-Passing Systems" (July 2013), is devoted to the basic algorithmic knowledge of distributed computing that students should master at the end of their Master's degree.

Professor Michel Raynal has been an invited speaker at more than 20 international conferences (including the prestigious DISC, Euro-Par, ICDCN, SIROCCO, OPODIS and NCA conferences). He belongs to the editorial boards of several international journals (including JPDC, IEEE TC, JCSSE and FDCS). He has served on program committees for more than 150 international conferences (including ACM PODC, DISC, ICDCS, IPDPS, DSN, LADC, SRDS, SIROCCO, etc.), and has chaired the program committees of more than 25 international conferences (including DISC twice, ICDCS, ICDCN, OPODIS, SIROCCO and ISORC). He has also been general chair of several major conferences.

Moreover, Michel Raynal served as the chair of the steering committee leading the DISC symposium series in 2002-2004, and was a member of the steering committees of ACM PODC (ACM Symposium on the Principles of Distributed Computing) and SIROCCO (Colloquium on Structural Information and Communication Complexity). He is currently a member of the steering committees of ICDCN (Int'l Conference on Distributed Computing and Networks) and IEEE ICDCS (Int'l Conference on Distributed Computing Systems). He is also the European representative on the IEEE technical committee on Distributed Computing. Michel Raynal received the IEEE ICDCS "Best Paper" award three times in a row, in 1999, 2000 and 2001. He also received the "Best Paper" award at the international conferences SSS 2009 and SSS 2011, the "Distinguished Paper" award at EUROPAR 2010, the "Best Paper" award at DISC 2010, and the "Best Paper" award at ACM PODC 2014. Since 2010, Michel Raynal has been a senior member of the prestigious Institut Universitaire de France.

More information can be obtained at http://www.irisa.fr/prive/michel.raynal/ or, as far as publications are concerned, from DBLP, CiteSeer, or any other system.

Dr. Michel Raynal
Institut Universitaire de France & IRISA
Université de Rennes 1, Campus de Beaulieu
35042 Rennes Cedex, France
Email: raynal@irisa.fr


T#5:
Software Reuse and Reusability Involving Requirements, Product Lines, and Semantic Service Specifications

Handout: T5-Handout
(copyrighted materials. The copyright belongs to the tutorial presenter unless otherwise stated)

Abstract:
Software reuse and reusability are often addressed only at the level of code or low-level design. In contrast, this tutorial explains them starting from requirements. It integrates and presents three approaches co-developed by the presenter over more than a decade, which also involve product line technology, case-based reasoning and semantic service specification.

One approach deals with requirements reuse and reusability in the context of product lines. It makes the relations among product line requirements explicit, so that single-system requirements in this product line can be derived consistently. A key issue is commonality and variability across different products. This tutorial shows how requirements for a product line can be modeled, selected and reused to engineer the requirements for innovative new products.

Another approach to software reuse involves case-based reasoning. Instead of explicit relations between requirements (or other artifacts), similarity metrics are employed for finding the software case in a repository that is most similar to a given set of requirements. This works even when only a single envisioned usage scenario has been specified so far, and it also allows reusing requirements from retrieved cases. The major point, however, is to facilitate reusing software design (including architecture) and code from similar software cases. In fact, these two approaches can be usefully combined.

Yet another approach involves semantic service specification, which facilitates automated generation of service compositions. In the context of business software reuse and reusability, this formal specification facilitates automated verification, and validation involving business rules as well. These approaches have different key properties and different trade-offs between the cost of making software artefacts reusable and the benefit of reusing them; these trade-offs will be explained in particular in this tutorial.
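As a rough, purely illustrative sketch of the case-based retrieval idea mentioned above (this is not the presenter's tooling; the repository contents, similarity measure and names are hypothetical), previously built software cases could be ranked by the textual similarity of their requirements to a new set of requirements:

    import math
    from collections import Counter

    def cosine_similarity(text_a, text_b):
        # Crude bag-of-words cosine similarity between two requirement texts.
        a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
        dot = sum(a[t] * b[t] for t in a)
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    # Hypothetical repository of previously built software cases.
    repository = {
        "online-shop": "user browses catalog adds items to cart pays by credit card",
        "library-system": "user searches catalog borrows book returns book pays late fee",
        "chat-app": "user sends message receives message creates group",
    }

    new_requirements = "customer browses product catalog adds items to cart and pays online"

    ranked = sorted(repository.items(),
                    key=lambda case: cosine_similarity(new_requirements, case[1]),
                    reverse=True)
    print(ranked[0][0])  # most similar case, whose design and code are reuse candidates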

Bio: Please see T#1 above for presenter's Bio.

Dr. Hermann Kaindl, Professor
Vienna University of Technology, ICT
Gusshausstr. 27-29
A-1040 Vienna, Austria

Email: kaindl @ ict.tuwien.ac.at
Web: http://www.ict.tuwien.ac.at/kaindl


T#6: Data Intensive Computing: Algorithms and Tools (Canceled)

Handout: T6-Handout
(copyrighted materials. The copyright belongs to the tutorial presenter unless otherwise stated)

Abstract:
Both the research and the industrial communities currently recognize the importance of data-driven decision-making. The process of collecting, storing, managing and extracting knowledge from massive data is often referred to as "Big Data Processing". For realising the full potential of big data processing, traditional platforms and tools are no longer suitable. New technologies are emerging to make data analytics scalable and efficient. The new approaches exploit the power of distributed infrastructures and "shared nothing" architectures to change the way data is managed and analyzed. The tutorial will start by presenting distributed storage systems for big data. Then, the most important frameworks for large-scale data-intensive computing will be presented: data-parallel frameworks (MapReduce, ...), graph-centric frameworks (Pregel, ...) and stream processing frameworks (Aurora, ...).

Bio:
Laura Ricci is a Professor in the Department of Computer Science, University of Pisa, where she has taught several courses in the area of Computer Networks. She is a Research Associate of ISTI CNR, Pisa, where she has collaborated on international projects in the area of cloud, high-performance and P2P computing. She is the co-chair of the LSDVE workshop series, Large Scale Distributed Virtual Environments on Cloud and P2P, held in conjunction with EUROPAR. She has been the guest editor of several special issues of international journals and has chaired workshops at international conferences. Laura Ricci is the author of more than 90 papers published in refereed journals, books and conference proceedings. Her main research interests are in the area of distributed computing, in particular cloud, P2P and data-intensive computing.

Dr. Laura Ricci
Department of Computer Science
University of Pisa
Pisa, Italy
E-mail: ricci@di.unipi.it
Web: http://www.di.unipi.it/~ricci/


T#7:
Prevalence Estimation in Information Retrieval, Machine Learning, and Data Mining

Handout: T7-Handout
(copyrighted materials. The copyright belongs to the tutorial presenter unless otherwise stated)

Abstract:
In recent years it has been pointed out that, in a number of applications involving classification, the general goal is not determining which class (or classes) individual unlabelled data items belong to, but determining the prevalence (or "relative frequency") of each class in the unlabelled data. The latter task is known as quantification (or prevalence estimation, or class prior estimation). Assume a market research agency runs a poll in which they ask the question "What do you think of the recent ad campaign for product X?" Once the poll is complete, they may want to classify the resulting textual answers according to whether or not they belong to the class LovedTheCampaign. The agency is likely not interested in whether a specific individual belongs to the class LovedTheCampaign, but in knowing how many respondents belong to it, i.e., in knowing the prevalence of the class. In other words, the agency is interested not in classification, but in prevalence estimation. Prevalence estimation is thus akin to classification evaluated at the aggregate (rather than the individual) level. The research community has recently shown a growing interest in tackling prevalence estimation as a task in its own right. One of the reasons is that, since the goal of prevalence estimation is different from that of classification, prevalence estimation requires evaluation measures different from those used for classification.

A second, related reason is that, as has been shown, using a method optimized for classification accuracy is suboptimal when quantification accuracy is the real goal. A third reason is the growing awareness that prevalence estimation is going to become more and more important: with the advent of big data, more and more application contexts are going to spring up in which we will simply be happy with analyzing data at the aggregate (rather than the individual) level. The goal of this tutorial is to introduce the audience to the problem of prevalence estimation, to the techniques that have been proposed for solving it, to the metrics used to evaluate them, to its applications in fields as diverse as information retrieval, machine learning, and data mining, and to the problems that are still open in the area.
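To give a flavour of why the naive "classify and count" strategy is biased and how it can be corrected, here is a small sketch of the well-known adjusted classify-and-count idea (an illustration only; the numbers are made up, and the tutorial itself may cover different methods):

    def classify_and_count(predictions):
        # Naive prevalence estimate: the fraction of items labelled positive.
        # Biased whenever the classifier's error rates are asymmetric.
        return sum(predictions) / len(predictions)

    def adjusted_classify_and_count(predictions, tpr, fpr):
        # Correct the observed positive rate using the classifier's true/false
        # positive rates estimated on held-out data, by solving
        #   observed = tpr * p + fpr * (1 - p)   for the true prevalence p.
        observed = classify_and_count(predictions)
        p = (observed - fpr) / (tpr - fpr)
        return min(1.0, max(0.0, p))  # clip to a valid prevalence

    # Toy example: true prevalence 0.30, classifier with tpr = 0.8 and fpr = 0.1,
    # so the expected observed positive rate is 0.8 * 0.3 + 0.1 * 0.7 = 0.31.
    predictions = [1] * 31 + [0] * 69
    print(classify_and_count(predictions))                     # 0.31
    print(adjusted_classify_and_count(predictions, 0.8, 0.1))  # ~0.30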

Bio:
Fabrizio Sebastiani has been a Principal Scientist at QCRI-QF since July 2014; from March 2006 to June 2014 he was a Senior Researcher at the Istituto di Scienza e Tecnologie dell'Informazione, Consiglio Nazionale delle Ricerche, Italy, from which he is currently on leave; before February 2006 he was an Associate Professor at the Department of Pure and Applied Mathematics of the University of Padova, Italy. His main current research interests are at the intersection of information retrieval, machine learning, and human language technologies, with particular emphasis on text classification, information extraction, opinion mining, and their applications. He is an Associate Editor for ACM Transactions on Information Systems (ACM Press) and AI Communications (IOS Press), and a member of the Editorial Boards of Information Retrieval (Kluwer) and Foundations and Trends in Information Retrieval (Now Publishers); of the latter he is also a past co-Editor-in-Chief. He is also a past member of the Editorial Boards of the Journal of the American Society for Information Science and Technology (Wiley), Information Processing and Management (Elsevier), and ACM Computing Reviews (ACM Press).

He is the Editor for Europe, Middle East, and Africa of Springer's "Information Retrieval" book series. He has been the General Chair of ECIR 2003 and SPIRE 2011, and a Program co-Chair of SIGIR 2008 and ECDL 2010; he is the appointed General co-Chair of SIGIR 2016. From 2003 to 2007 he was the Vice-Chair of ACM SIGIR. He has given several tutorials at international conferences (among which ECDL 1997, ECDL 1998, ER 1998, WWW 1999, ECDL 2000, COLING 2000, IJCAI 2001, ECDL 2001, ECIR 2014, EMNLP 2014) and summer schools (among which ESSLLI 2003 and ESSIR 2005) on themes at the intersection of machine learning and information retrieval.

Dr. Fabrizio Sebastiani
Qatar Computing Research Institute
Qatar Foundation
Doha, Qatar
Email: fsebastiani@qf.org.qa