This is an old version of CartelJornadas, from 2005-01-29 20:10:28.


Workshop on 'Causal Inference in the Campbell and Rubin Traditions: Building Bridges'


Faculty of Psychology, University of Sevilla, February 18–19.


The main objective of this workshop is extensive discussion of the topic. Each presentation will take only 30 minutes, in order to leave time for a lively plenary debate among participants.


The presenters and abstracts are listed below.


1. D. Rubin: 'Rubin's Causal Model'. He will present what his framework involves and why it has become a standard. He will also present work in which the framework reveals mistakes, and respond to criticisms of it.


2. T. Cook: 'Struggling with Causal Generalization'. He will describe causal generalization, walk through various approaches to it, and argue that all are inadequate by the highest standards, either theoretically or practically. The talk seeks to put research on causal generalization more centrally onto the scholarly agenda.


3. W. Shadish: 'Exploring similarities, differences, agreements, and disagreements between the models'. This talk will compare and contrast the two models, examining similarities, differences, agreements, and disagreements between them. For example, similarities include an emphasis on causal description more than causal explanation, and a focus on how to improve inferences from nonrandomized experiments. Examples of differences include their different (but complementary) causal philosophies (falsificationist vs. counterfactual) and the degree of quantification of the models. Examples of agreements include that experimental causes must be manipulable, and that good design is the essential underpinning of good analysis. Examples of disagreements are (potentially) the conclusions that can be reached when model assumptions are not met, and whether all causes must be manipulable.


4. R. Steyer: 'Latent variables and the analysis of individual and average causal effects'. Reviewing the definitions of true-score variables in Classical Test Theory and of latent trait variables in Latent State-Trait Theory, it is shown that individual causal effects can be defined as differences between the true scores in a treatment and a control condition, respectively. This implies that the design and analysis techniques of latent variable modelling can be used for models in which the individual causal effects are the values of latent variables. Specifically, designs and methods of data analysis are presented that yield not only (a) estimates of the average causal effect of a treatment variable on a response variable in the sense of Rubin's approach to causality, but also (b) estimates of the variance of the individual causal effects and (c) of the covariance between pretest and individual causal effects. It is also shown how to include exogenous variables in the analysis that (d) explain the interindividual differences in the individual causal effects of the treatment variable on the response variable. All this rests on a specific assignment of units to the treatment conditions, the assessment of pretests, and some additional assumptions which, however, can also be tested in the analysis.


5. S. Chacon: 'Empirical study of threats to validity: a first contribution'. (Authors: S. Chacon & P. Holgado). In program evaluation practice, there is often no systematic way to control threats to validity and their consequences for effect size estimation. In this intervention context, Campbell's approach has provided a conceptual framework for evaluating the main threats to the various kinds of validity. His original work emphasized concepts from the philosophy of science and the practical issues confronting social researchers. Nonetheless, there has been little effort to systematize this conceptual framework, for example by clarifying key concepts such as plausibility, and data from observational designs have mainly been analyzed by methods based on plausibility considerations. This presentation calls for an empirical analysis of validity threats as applied to the analysis of causal effects. Specifically, we will present the main objectives of a research project on this topic and a first contribution concerning one threat to statistical conclusion validity.


Organizers: research group 'Innovaciones Metodológicas en Evaluación de Programas (HUM-649)'


Funders:

