No earth-shattering news from the meeting (not, as you could tell from my last post, that I necessarily expected any). I did make good use of some of the time, though, by taking a stack of scientific literature with me and (for once) actually reading most of it.
Like most researchers, I too often equate photocopying a paper (or, in more recent years, printing off a copy of the PDF) with actually reading it. This was brought home to me again as I made my way through this stack. The papers I'd taken with me are the ones that come closest to the idea I've been (obliquely) talking about for the last few weeks, and I was going over them hoping to pick up some relevant details that could help out in the next round of experimental tests.
That I did. I'm now convinced that the experiment I ran last week had almost no chance of actually working, and I'm almost equally convinced that I know why. (I generally find my own arguments pretty convincing, which is a mental habit that can be an equally great strength or handicap. You never really know which until it's too late, though...)
At any rate, I think there's a key variable in my experimental setup that I've wrongly estimated. It'll take a few days to rearrange things to put that hypothesis to the test, but that's the next order of business. Of course, if I get everything lined up and things still don't work, I haven't proven anything. But the changes make logical sense to me (and to my co-workers who are helping out or following along). If things don't work this time, at least I'll feel that I've given them every chance to. And I'll be incorporating these changes in the future variations I've spoken of (the ones that, intrinsically, I think have a better chance of working).
Whenever you change something in your experimental design, there's always a nagging fear that you're unknowingly about to abandon the only conditions that could have made things work. In this case, the good part is that the original setup I chose is still available. It's the starting point for the new one, and I can (and will) still take data under those conditions as I move into the new ones.
Of course, the downside of testing things out this thoroughly is that your original idea is on the chopping block the whole time. Getting all the variables figured out, thinking through just how you want to run things - these could be just sharpening the blade. The nerve-racking thing about science is that we really do prove things. And sometimes we prove ourselves wrong...