
PhD Discussion Forum

The following thread is brought to you by our sister Web site PostgraduateForum.com. If you wish to reply or post your own thread, you will be redirected to this site.


will bad data / methodological issues fail my thesis?


User: mrkdsmith - 14 December 2011 12:38

======= Date Modified 14 Dec 2011 12:41:33 =======

Hi, I am looking for some advice.

I am in the final stages of writing my thesis... or so I thought. I have battled with this thesis for most of the year, and all the chapters are now more or less finished, except one! I have just had comments back on my final data chapter, and they are not good.

Basically, without going into the nitty-gritty details, I have written the chapter in the following vein:

We conducted an experiment and hypothesised a certain result. We did not find the hypothesised result, but we did find something else which is interesting, though not related to the original aims. The hypothesised results were not found because assumptions made in designing the experimental paradigm were (with the benefit of hindsight) incorrect. Some suggestions are made for adapting the paradigm to identify the initially hypothesised effect.

My supervisor is not happy with this, as it comes across that the experiment failed (which it did!), and the implication that the experiment was badly designed would look very bad and most likely not pass the viva. Unfortunately, as this was the final study in the PhD, the issue didn't come to light until quite late, and there is no time to run another set of experiments. I am also on a full-time post-doc now, so time-wise it isn't feasible for me to do any more work other than thesis writing and editing, and I need to get it submitted pronto. He suggests I remodel my aims so they are more in line with the results (which I don't feel comfortable with). He has also suggested further data analysis, which is going to be near impossible: juggling the post-doc and the thesis revisions is hard enough! I have also analysed this data so many times that I am more than certain the effect we want is not there.

I just want some impartial advice from anyone who has been in this situation (preferably someone who has gone through and survived the viva!). Have people got through their viva when the data doesn't match up to their hypotheses, or when the method they thought was appropriate later turned out to be inappropriate? I'm sure many other PhD students have been in a similar situation. Surely not every experiment is perfectly designed in the first instance, and surely allowances can be made by the examiners provided the student can identify what went wrong, why it went wrong, and how to make it right in future experiments... or am I being complacent in thinking this? I just want to know how other people have dealt with this problem.


If anyone can give me some advice, encouragement, warnings or the like, I would be eternally grateful!

Thanks very much!

User: tather - 14 December 2011 17:18

======= Date Modified 14 Dec 2011 17:28:25 =======

I had pretty much the same experience with my first chapter. I conducted an experiment, the treatment failed, but I noticed an interesting result not covered by the original aims. I was given the same advice as you: write it up with the new result as the main objective. This may not make sense to you, but that is not the point. The point is that you found a result that is statistically valid and theoretically sound, and your job now is to communicate that result. Keeping the original objective/hypothesis would only confuse the reader.

But I'm sure other people may have different opinions on this - anyone care to comment?

User: Noctu - 14 December 2011 19:24

I agree with Tather. Also, showing that you can constructively criticise your methodology shows that you have matured and developed as a researcher. You may get a question about your original methodology in your viva - it might be worth preparing for that and thinking about which alternative methodology you would use if you could do it again (and other research directions you could take).
I'd say that many PhD students have the same experience. Bear in mind that the PhD is increasingly seen as a form of research training rather than proof of expertise in a very narrow field, and as long as you can describe your methodology, its pros and cons, and future directions, then it can only work in your favour!

User: cplusplusgirl - 15 December 2011 00:21

I completely agree with the last two posts, and it is what your supervisor is trying to push you into doing too!

A failed experimental design for one aim produced some interesting results. Assuming you repeated the experiment many times, you then actually DID use this design for a new aim and a new set of results. This is how you have to write it up (in the positive).

Hope this makes sense.



User: Skig - 15 December 2011 11:03

I didn't get what I expected at all either, and let's be honest: if you hadn't conducted your research, the field would still think your methods were best for the topic ;-) So what have you learned that can be passed on to others? Do your results make any sense at all? Do they contradict previous research? What could they possibly mean?

It sounds like you've already identified issues with the methodology and, as long as you're able to justify why this was found to be the most suitable method at the time, you should be fine.

I agree that remodelling aims is not the best solution. I've not had my viva yet, but I'm prepared to be true to my research and say 'I can't be sure why my results came out like this, but I have a few theories bla bla' and 'more research is needed to further investigate why bla bla'.

User: mrkdsmith - 16 December 2011 11:04

Thanks everyone for the advice.

Another issue I've not mentioned is that this final chapter is based on results from two previous chapters in the thesis. If I remodel the aims to fit the findings, it will ruin the logical flow of the thesis... One thing that was suggested was that I remodel the aims into a very basic, broad aim that will encompass both the expected findings and the actual findings. I think I can afford to be less specific, as the methodology of this chapter is a mixture of established (experimental) methods and new (analytical) methods. Like I said, I did find something interesting that is consistent with previous research, and a broad aim will cover this finding and allow comments on how these results do not match up to the findings from the previous chapters, without sounding so negative. Does this sound logical?

Like I said, I am more than confident in critiquing the methods I've used (although I don't think my supervisor has the same confidence in me!). The issue for me is more about trying to make the results sound as positive as possible, with less emphasis on the experiment having 'failed'.

Thanks again

User: Tudor_Queen - 08 February 2017 18:29

Hi all, I've come across this old thread and I'm really interested to hear people's thoughts on it. I'm not faced with this situation myself (not now, anyway), and I'm not sure what I'd do if I were. On the one hand it seems OK, as justified by the posters above. But in light of all the recent talk about p-hacking, especially in psychology, I wondered what others thought. Would you do it? Is p-hacking talked about at your uni?

User: TreeofLife - 09 February 2017 10:44

Yes, interesting post! I've not heard of p-hacking per se - is that like manipulating statistical results?

But I think you should change the hypothesis to fit the results in the case above - it really doesn't matter what the thoughts were initially, as long as the data is robust. That's what we would do in molecular biology anyway, but then we don't really work with rigid hypotheses; it's usually more experimental, i.e. it could be this or that, let's test it to find out, let's see if this works, etc.

User: Tudor_Queen - 09 February 2017 21:21

Yes, manipulating, but not necessarily directly. Some examples are monitoring your data while you are still collecting it (e.g. testing to see whether there is a significant result yet), entering covariates until you get a significant result, or only reporting the comparisons that came out statistically significant (which means in reality you may have performed many tests on the same data that didn't yield significant results, inflating your chances of a Type I error/false positive). That's as I understand it, anyway. We had a meeting about it, as apparently it is common practice, but now there are papers on the damage it can do (e.g. hard-to-replicate findings, bogus findings) and... well, I think it still happens, but people are quiet about it. At the end of the day, it can be the difference between getting published or not... or maybe having a more impressive thesis.

I think if the plan at the outset is to conduct exploratory analyses, that is different from claiming to have set out to test one thing while secretly having tested ten different things.
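[Editor's note: to make the "many tests on the same data" point above concrete, here is a small illustrative simulation, not from the thread itself. It uses the standard fact that under a true null hypothesis, p-values are uniformly distributed on [0, 1], so we can simulate p-values directly without simulating raw data.]

```python
import random

random.seed(42)

def family_wise_error_rate(n_tests, alpha=0.05, n_sims=20_000):
    """Estimate the chance of at least one false positive when running
    n_tests independent tests on pure-noise data. Under a true null,
    each test's p-value is uniform on [0, 1]."""
    hits = 0
    for _ in range(n_sims):
        # One simulated "study": n_tests p-values drawn from null data.
        if any(random.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / n_sims

for k in (1, 5, 10):
    rate = family_wise_error_rate(k)
    print(f"{k:2d} tests -> P(at least one p < 0.05) ~= {rate:.2f}")
```

Running one test holds the false-positive rate near the nominal 5%, but reporting only the "significant" result out of ten secret tests pushes the real error rate towards 40% (analytically, 1 - 0.95^10 ≈ 0.40), which is the inflation described above.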

User: sisyphus - 18 February 2017 22:07

Speaking from the viewpoint of a statistician...

You should not go back and retrospectively change your aims. To do so is misleading, and if it comes up in the viva and you have a good examiner, it should mean major corrections as a minimum, since you have been fiddling and showing a lack of understanding of the scientific method (and statistics). Your study was powered to detect an effect, which you did not find. Maybe the effect is there but just didn't show due to chance (a false negative, or Type II error), or maybe it genuinely isn't there; and a spurious effect you find instead is a false positive, or Type I error - make sure you understand both and can comment on them.

You can, however, then discuss the interesting finding you did have, its interpretation, and whether it is 'statistically significant' (a term with weak meaning, which you should only deploy if you know what it means). Suggest also what further research might be done to test this effect properly.

What you have done is essentially data mining - digging through the data for interesting findings. This is legitimate for forming hypotheses, but it is different from testing a hypothesis, which is why you set up your study a priori.
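[Editor's note: one standard way to honour the hypothesis-forming vs hypothesis-testing distinction above is a hold-out split: mine freely on one half of the data, then run a single pre-committed test on the untouched half. The sketch below is illustrative only; the dataset, variable names, and effect size are invented.]

```python
import random
import statistics

random.seed(0)

# Hypothetical dataset: 200 noisy measurements for 8 candidate variables.
# By construction, only "var_3" carries a real effect (mean 0.6); the rest
# are pure noise. The analyst is assumed not to know this.
data = {
    f"var_{i}": [random.gauss(0.6 if i == 3 else 0.0, 1.0) for _ in range(200)]
    for i in range(8)
}

def split(values, frac=0.5):
    """Split a sample into an exploration half and a confirmation half."""
    cut = int(len(values) * frac)
    return values[:cut], values[cut:]

# 1) Exploration half: data-mine freely for the most promising variable.
explore = {name: split(vals)[0] for name, vals in data.items()}
candidate = max(explore, key=lambda name: abs(statistics.mean(explore[name])))

# 2) Confirmation half: test ONLY that one candidate, fixed in advance
#    of looking at this half, so the test is not contaminated by the mining.
holdout = split(data[candidate])[1]
mean = statistics.mean(holdout)
se = statistics.stdev(holdout) / len(holdout) ** 0.5
z = mean / se
print(f"mined candidate: {candidate}, holdout z-score: {z:.2f}")
```

The mining step in part 1 is the legitimate hypothesis-forming activity; part 2 is the a priori test. Skipping the split and reporting the best of eight tests as if it were the only one run is exactly the Type I error inflation the post warns about.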

User: minta - 13 October 2017 19:33

Hi!!

I came across this post as I was desperately searching for people who have had the same troubles as I did. I had major corrections after my viva; as heartbreaking as it is, I am determined to finish and obtain my degree no matter what.
Among the corrections, I found a few questions inquiring about some weird results that are not in accordance with published work. I searched a lot, and discovered that I had made a mistake in the media composition, which resulted in these different results! Now I actually have no problem; I am happy I found this out before I submitted again.
I was wondering, how do I put this into the results and discussion parts now? Should I simply leave those three experiments as they are and honestly state the reason (which is what I mostly want to do)? Or should I rework things in a way which I do not think is going to go with the main aim of my thesis?
I have no way to repeat these experiments at all, and the committee do not want me back in the lab. What do you think?





FindAPhD. Copyright 2005-2018
All rights reserved.