Thursday 24 April 2014

What to do when you get results that don't make sense

A few recent conversations, together with this blogpost on list experiments by Andrew Gelman, have made me think about the nature of the file drawer problem.

Gelman quotes Brendan Nyhan:
I suspect there’s a significant file drawer problem on list experiments. I have an unpublished one too! They have low power and are highly sensitive to design quirks and respondent compliance as others mentioned. Another problem we found is interpretive. They work best when the social desirability effect is unidirectional. In our case, however, we realized that there was a plausible case that some respondents were overreporting misperceptions as a form of partisan cheerleading and others were underreporting due to social desirability concerns, which could create offsetting effects.

and Lynn Vavreck:
Like the others, we got some strange results that prevented us from writing up the results. Ultimately, I think we both concluded that this was not a method we would use again in the future.
Many of the commenters on the blog said that the failure to publish these results reflected badly on the researchers and that they should publish the quirky results to complete the scientific record.

Both of these examples, as well as many other stories I've heard, make me think that the major cause of the file drawer effect in social science is not null results but inconclusive, messy, and questionable results. The key problem arises when an analysis produces a result that makes you reassess the measurement assumptions you were working with: a secondary correlation with a demographic variable comes out in an unexpected direction, say, or the responses bunch up in three places on a 10-point scale.

The problem comes down to this: a survey or other study designed to test an empirical proposition is unlikely to also be well designed for testing the validity of the measures involved, or how design effects are affecting them.

The results you get from a study designed to test an effect are often enough to cast doubt on a measure's validity, but rarely enough to demonstrate the lack of validity convincingly (i.e. to a publishable standard). The outcome is that the paper can be written up either as a poor substantive article (the validity of the measures is in doubt) or as a poor methodological article (the evidence about the measures' validity is weak either way, because the study wasn't designed to test it).

One answer to this is to do more pre-testing. This can help to establish the validity of measures before working with them and can certainly identify the most obvious problems. However, unless the pre-test is nearly as large as the actual sample, the correlations with other variables won't be particularly clear in advance. In addition, pre-testing won't help you understand design effects unless it tests different design combinations.

However, what is really needed are whole studies devoted to examining design effects experimentally and establishing the measurement properties of these instruments. Until that happens for methods such as list experiments, researchers will be stuck with questionably valid results that are hard to publish as either good empirical or good methodological pieces.
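
To make the power problem concrete, here is a minimal simulation sketch of the standard list experiment estimator (the difference in mean item counts between the treatment group, which sees the sensitive item, and the control group, which doesn't). All the numbers are illustrative assumptions, not taken from any real study: four innocuous baseline items each endorsed at about 50%, a sensitive item with a true prevalence of 15%, and 1,000 respondents per arm.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_list_experiment(n_per_arm=1000, true_prevalence=0.15,
                             n_baseline_items=4):
    """One simulated list experiment, estimated by difference in means."""
    # Control group: counts of 4 innocuous items, each endorsed with p=0.5.
    control = rng.binomial(n_baseline_items, 0.5, n_per_arm)
    # Treatment group: same baseline items plus the sensitive item.
    sensitive = rng.binomial(1, true_prevalence, n_per_arm)
    treatment = rng.binomial(n_baseline_items, 0.5, n_per_arm) + sensitive
    return treatment.mean() - control.mean()

estimates = [simulate_list_experiment() for _ in range(2000)]
print(f"mean estimate:   {np.mean(estimates):.3f}")  # ~0.15, so unbiased
print(f"std of estimate: {np.std(estimates):.3f}")   # ~0.046
```

Even with a respectable 1,000 respondents per arm, the standard error is about a third of the quantity being estimated, so mildly strange patterns in the results are very hard to distinguish from noise.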

A more radical approach would be to encourage journals for ideas that didn't quite work out: short research articles explaining why an idea should have worked out nicely but ended up a damp squib. These would be useful for meta-analyses of why certain techniques are problematic in practice, without demanding the write-up time of a full methodological piece.


Wednesday 12 February 2014

Something every multi-national survey should do


This table is from the Afrobarometer website. You would think that every cross-national survey would have a page like this, but for some surveys I'll not mention I've never managed to track one down, while others hide theirs so effectively it might as well not be there.

Your site can have as much on it as you like, but before anything else the following three things should be linked prominently from the front page:

  1. A table like the one above
  2. Links to the original questionnaires for each country
  3. Links to the raw data files, preferably merged as completely as possible


Monday 3 February 2014

When working in different time zones is a good thing

I've recently been working with a team in a time zone seven hours away from mine. While this type of collaboration can often be a frustrating process of missed meetings and miscommunications, it has actually been surprisingly effective. In fact, the vast majority of difficulties we have encountered have stemmed from not being able to hold extended face-to-face meetings; but given that everyone involved is spread across four locations, those meetings would not have been regular even if we were all in the same time zone.

So why has the time difference worked well in this instance? I think part of the success has been due to the type of work we are doing. We are currently writing a document together that requires revisions and reactions from team members. Generally, reading, commenting on and revising the document takes around three hours.

However, as anyone who has worked with multiple contributors knows, several people commenting at the same time produces a tangle of incompatible suggestions and versions. You either have to reconcile those incompatible document versions afterwards or wait until each person has finished their changes before someone else starts.

Being in different time zones actually improves this process because each side of the team tends to wake up to a new draft from the other side without having to spend working time waiting for it to arrive.

The key factor here is that the ideal length of independent work time (on this project) is fairly close to the size of the time zone difference. If the ideal independent work time were a lot shorter, there would be long stretches of underutilised time waiting for the other side to do their work, and being in the same time zone would work much better.

The stretches of a project where long meetings are necessary fit better within a single time zone, as they involve a continuous back-and-forth conversation that is greatly slowed down if each message has to wait a day for a reply.

Similarly, if the ideal length of independent work time were a lot longer than the time zone difference, then the time zone wouldn't matter at all. For instance, the time zone difference between me and the journals I submit to is not at all relevant, because the conversation with editors and reviewers takes place on the scale of weeks and months.

So what could this anecdote mean for the wider world? Well, it might suggest that we would expect information industries to be distributed around the world according to the length of the tasks required at each stage of a process. For instance, translation or editing agencies would be well placed several hours away from their sources of demand: an Indian editing firm might be able to offer overnight editing services to US firms.

I'd be interested to hear if anyone has looked into this more generally or if this mechanism holds outside of the particular case here.


Caveats:
  • I'm explicitly saying that the success of cross-time-zone working depends on the type of task being performed and its requirement for parallel versus serial processing. I don't think that time zone differences are preferable in all cases.
  • As usual, this is written as a piece of speculation to open a debate, and is clearly not carefully researched academic writing.

Friday 10 January 2014

Is it really that much more expensive to live in London versus Los Angeles?

An interesting project has been popping up on my news feed today. It's a new tool from expatistan comparing the cost of living in different cities (mainly from the perspective of the footloose professional class).

Comparing the two cities I've lived in most recently gave an interesting result:

[Expatistan cost-of-living comparison: London vs Los Angeles]
The site says that it's designed as a tool to help you decide between job offers around the world: you know your potential salaries and want to know which will give you a better standard of living.

The standout category in this comparison is transportation, with Los Angeles coming out 54% cheaper than London.

This is calculated by taking the average percentage difference in the price of the following (a rough version of the calculation is sketched below):
  • Volkswagen Golf 2.0 TDI 140 CV 6 vel. (or equivalent), with no extras, new
  • 1 liter (1/4 gallon) of gas
  • Monthly ticket public transport
  • Taxi trip on a business day, basic tariff, 8 Km. (5 miles)

The car is actually 10% cheaper in London than in Los Angeles. However, London's petrol/gas is 55% more expensive, which is a fair reflection of the much higher fuel tax in the UK.

However, the monthly public transport ticket is where the difference really kicks in: LA's comes out as the equivalent of £41, compared with £127 for London's.
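
Putting those three numbers together gives a sense of how much the monthly ticket drives the category. In this sketch, only the ticket uses real prices; the car and petrol figures are illustrative index values consistent with the percentages above, the taxi fare isn't quoted in this post so it is omitted, and the averaging formula is a guess rather than Expatistan's documented method.

```python
# Back-of-the-envelope "average percentage difference" for transportation.
# Only the monthly ticket row uses real prices (GBP); the car and petrol
# rows are index values consistent with the percentages quoted above.
items = {
    "VW Golf":         (100, 111),  # London ~10% cheaper than LA
    "litre of petrol": (155, 100),  # London ~55% more expensive
    "monthly ticket":  (127, 41),   # actual quoted prices
}

for name, (london, la) in items.items():
    pct = 100 * (london - la) / london   # how much cheaper LA is
    print(f"{name:16s}: LA {pct:+5.0f}% cheaper")

avg = sum(100 * (lon - la) / lon for lon, la in items.values()) / len(items)
print(f"{'simple average':16s}: LA {avg:+5.0f}% cheaper")
```

This crude average comes out around +31%, short of the site's 54%; the missing taxi fare and the unknown weighting presumably account for the gap. Either way, the ticket dominates the category.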

But that ticket price difference isn't an arbitrary choice by Transport for London. It actually represents the best reason to prefer London to LA: in most of London you can take public transport anywhere in fairly little time. Spending £127 means that you don't need to buy a Volkswagen Golf or a litre of petrol at all; it's rarely worth owning a car if you live in central London.

In LA, public transport is simply not a substitute for car ownership. The £41 doesn't pay for anywhere near as much transport in LA. 

And that's not even getting into the largest single expense for many US households: healthcare.

Saturday 4 January 2014

Why emotional intelligence leads to poorer job performance: a hypothesis

The Atlantic has an interesting summary of the recent literature on the negative effects of emotional intelligence. Essentially, emotional intelligence allows not only for better interpersonal relations and cooperation but also for a greater ability to manipulate others.

One of the most interesting examples that the article gives is that, in non-emotional work (data analysis or car repair rather than counselling or teaching), there is actually a negative correlation between emotional intelligence and job performance (see here for the review article). The Atlantic article proposes that emotional intelligence distracts people from their work in these types of jobs: people spend their time reading their colleagues rather than their spreadsheets.

I have an alternative explanation that should probably be considered. While emotional intelligence may not make you better at low-emotion jobs, it probably makes you more likely to be hired or promoted, conditional on prior job performance (because bosses like the employee, or because of good interview performance). If this is the case, then the negative correlation is simply the result of selection into jobs on the basis of emotional intelligence.

Essentially, a person with low emotional intelligence needs to be better at their job than a person with high emotional intelligence to get hired for the same position. That fits my anecdotal observations better than the distraction explanation (surely people with more emotional intelligence need to expend less energy on reading those around them).
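
Here is a minimal sketch of that selection mechanism. It deliberately assumes that emotional intelligence and task skill are uncorrelated in the applicant pool and that hiring rewards the sum of the two; both assumptions are illustrative, not estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative assumption: EI and task skill are independent in the pool.
ei = rng.standard_normal(n)
skill = rng.standard_normal(n)

# Hiring rewards both: skill through prior performance, EI through
# interviews and bosses liking the candidate.
hired = (ei + skill) > 1.5  # only the top slice of applicants gets the job

print(f"pool correlation:  {np.corrcoef(ei, skill)[0, 1]:+.2f}")               # ~0
print(f"hired correlation: {np.corrcoef(ei[hired], skill[hired])[0, 1]:+.2f}")  # negative
```

Among the hires the correlation comes out clearly negative even though it is zero in the pool: conditioning on being hired induces a trade-off between the two traits. If something like this is going on, the negative correlation in low-emotion jobs needs no distraction story at all.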

This hypothesis is also compatible with the finding that emotional intelligence is associated with better job performance in emotional work. In emotional work, emotional intelligence is a good signal for job performance (indeed it may be better than formal indicators), so promoting someone based on it probably improves the job/employee fit.

I've not read the literature in much depth so I'd be interested to hear if this hypothesis has been tested somewhere.