Thursday, November 8, 2012

Brand New Regression Tools!


By popular request, today we added a full set of regression tools to FastFig.  The general regression function, reg, can be used to perform single- and multivariate least-squares regressions using any function as a model.  For the common single-variable cases, we have provided five additional functions: linear_reg, poly_reg, log_reg, pow_reg and exp_reg.  All of these functions return the values of the parameter variables, r2, the standard error, and the total, regression and residual sums of squares for the regression.  Enjoy!
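FastFig's own syntax for these calls isn't shown here, but the quantities the regression functions report are standard. As a rough sketch (in Python with NumPy, not FastFig code), here is how the simple linear case produces the parameters, r2, standard error, and the three sums of squares:

```python
import numpy as np

def linear_reg(x, y):
    """Ordinary least-squares fit y = a*x + b, reporting the same kinds of
    quantities FastFig's regression functions return."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    a, b = np.polyfit(x, y, 1)            # fitted parameters
    y_hat = a * x + b                     # model predictions
    ss_tot = np.sum((y - y.mean()) ** 2)  # total sum of squares
    ss_res = np.sum((y - y_hat) ** 2)     # residual sum of squares
    ss_reg = ss_tot - ss_res              # regression sum of squares
    r2 = 1 - ss_res / ss_tot              # coefficient of determination
    se = np.sqrt(ss_res / (n - 2))        # standard error of the estimate
    return {"a": a, "b": b, "r2": r2, "se": se,
            "ss_tot": ss_tot, "ss_reg": ss_reg, "ss_res": ss_res}

result = linear_reg([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(result["r2"])  # close to 1 for this nearly linear data
```

The nonlinear variants (log_reg, pow_reg, exp_reg) report the same statistics; only the model being fit changes.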



Wednesday, October 31, 2012

When Will FastFig Have Graphs?

We've had many users ask us when FastFig will have graphing features. If you are familiar with Sage, the computational software that powers FastFig, you know that it includes some graphing capabilities.  You might be asking yourself why FastFig itself does not also include these features.  


We believe in an intuitive and smooth user experience.  We want to defy the learning curve.  The plot command used to create graphs in other languages (including Sage) just doesn't cut it for us.  We are creating a plotting interface that is entirely graphical so that you can create complex technical graphs just like you would in Excel.  


The second reason that we did not use the graphing features of Sage is that they are entirely image based. That would mean that every time you update your plot it would take a second or so to send the changes to the server and generate a new image for you. We can't have that! We are FastFig after all! 


So, in a few weeks' time, FastFig will roll out interactive browser-based graphing. FastFig will let you create beautiful graphs that you can use in your reports, presentations and even on your website.  As for why you don't have it yet: we need to make sure it's just right.



Friday, October 19, 2012

5 Cool Things You Might Have Missed In FastFig

By Brian Peacock


For those of you already enjoying the simplicity of FastFig, there are some more subtle features that you might want to make use of to streamline your work flow:




  1. Use the Search Box for Calculations -
    You can type FastFig code into the search box and get a result back.  I use this all the time when I am writing a longer program and want to test a little snippet of code.  I just open up the search tab and solve away.

  2. Paste In Functions - When you search using the search tab on the calculate page, you can press the + button next to each result to paste the function into your code--arguments and all.  You can also do the same thing with variables in the variables table at the bottom of the page.

  3. Change Variable Order - You can click and drag the left side of each variable in the variables table to change their order.  This is convenient for organizing your thinking and for preparing the model for publication since the order of the variables will become the order of the arguments.

  4. Switch the Result View - When you get a result back, use the drop-down menu next to it to switch between FastFig code, LaTeX equation and numerical views.

  5. Make Use of Key Codes



  • Shift-Enter: Solve

  • Ctrl/Cmd-S: Save

  • Ctrl/Cmd-F: Find

  • Shift-Ctrl/Cmd-T: Show Tools Tab

  • Shift-Ctrl/Cmd-S: Show Search Tab

  • Shift-Ctrl/Cmd-M: Show Me Tab


Let us know how you are using FastFig!



Wednesday, October 17, 2012

FastFig Launches!!!


FastFig launched in front of a live audience last night at 7:30 PM EST at the Lehigh Valley Tech Meetup.  After demoing the product, our CTO Brian Peacock led a countdown before the big red button was pushed and the new site went live.  The product was very well received by those in attendance, and we are excited to hear your feedback.  Currently FastFig is in limited beta; you can sign up to try it at http://www.fastfig.com/newuser/.



Photos by Raveen Beemsingh and Art Kney.



Monday, August 6, 2012

A Quantum Leap in Open Science with Michael Nielsen

 


  by Pat Cotey


If you haven't seen a TED talk, you are missing out on some great ideas and insights. I listened to Michael Nielsen discuss his thoughts on open science again yesterday and just had to share it with you. Even if you have seen it, take the time to watch and listen again (Transcription Below). You won't be disappointed! 


For those of you who have not been initiated, the TED Conference was started in 1984, devoted to the mission of "Ideas Worth Spreading," and brought folks together to discuss Technology, Entertainment and Design. Well, TED has spread, and there are now two major conferences that you can attend in person or online.  Campuses and other communities sponsor TEDx events, and most are posted online, so you can search a topic and give yourself the "ultimate brain spa," to quote TED attendees.


Michael, a pioneer of quantum computation, makes a call to action for scientists to share science for the greater good. He asks that we all take a step: participate actively in an open science platform, begin an open science project, or simply ask colleagues how openly they are working. We have the opportunity to reinvent science using the new tools we have for sharing science and working collaboratively.  By doing so, we can move research forward more quickly to learn, discover and cure the problems we all face.


Check out the TED website for more info, http://www.ted.com/




Michael Nielsen, "Open Science Now!" Transcription:



I'd like to begin my talk with a story.  It's a story that begins, but does not end, with a mathematician named Tim Gowers.  Gowers is one of the world's most renowned mathematicians.  He's a professor at Cambridge University and a recipient of the Fields Medal, often called the Nobel Prize of mathematics.


Gowers is also a blogger.  And in January of 2009, he used his blog to post a very striking question: is massively collaborative mathematics possible?  What he was proposing in this post was to use his blog to attack a difficult, unsolved mathematical problem, a problem which he said he would love to solve completely in the open, using his blog to post his ideas and his partial progress.  What's more, he issued an open invitation, inviting anybody in the world who thought they had an idea to contribute to post it in the comments section of the blog.  His hope was that by combining the ideas of many minds, he could make easy work of his hard mathematical problem.  He called this experiment the Polymath Project.


Well, the Polymath Project got off to a slow start.  In the first seven hours, nobody posted any comments.  But then a mathematician from the University of British Columbia named Jozsef Solymosi posted a short comment, and it seemed to break the ice, because a few minutes later a high school teacher named Jason Dyer posted a suggestion, and a few minutes after that another mathematician named Terence Tao, also a Fields medalist, posted an idea.  And things really started to move quickly at that point.  Over the next 37 days, 27 different people would post 800 substantive comments containing 170,000 words.


I was not a serious participant, but I was following along closely from the start, and it was just amazing.  The speed with which an idea would be tentatively proposed, then rapidly developed, improved, and sometimes discarded by other people, was just amazing.  Gowers described the process as being to ordinary research as driving is to pushing a car.


At the end of the 37 days, Gowers used his blog to announce that they had solved the core problem; in fact, they had solved a harder generalization of the problem.  The Polymath Project had succeeded.  What the Polymath Project suggests, at least to me, is that we can use the Internet to build tools that actually expand our ability to solve the most challenging intellectual problems.  To put it another way: we can build tools which actively amplify our collective intelligence, in much the same way as, for millennia, we have used physical tools to amplify our strength.


What I would like to talk about and explore today is what this means for science.  It is much more important than just solving a single mathematical problem.  It means an expansion in the range of scientific problems we can hope to attack at all.  It means a potential acceleration in the rate of scientific discovery.  It means a change in the way we construct knowledge itself.


Before I get too overexcited, however, I would like to talk about some of the challenges, some of the problems.  In particular, I would like to describe a failure of this approach.  It started in 2005, when a grad student at Caltech named John Stockton had a very good idea for what he called the Quantum Wiki, or Q-wiki for short.  It was a great idea.  The idea of the Q-wiki was that it would be a grand repository of human knowledge, much like Wikipedia, but instead of being focused on general knowledge, it would be focused on specialist knowledge in quantum computing.  It was going to be a kind of super-textbook for the field, with information about all of the latest research, about the big open problems in the field, about people's speculations on how to solve those problems, and so on.  Like Wikipedia, the intention was that it would be written by its users, in this case by experts in quantum computing.  I was present at the conference at Caltech in 2005 when it was announced, and some of the people I spoke to were very skeptical, but some were very excited by the idea.  They were impressed by the implementation, they were impressed by the amount of initial seed material that had been put on the site, and most of all they were excited by the vision.  But just because they were excited didn't mean that they wanted to take the time themselves to contribute.  They hoped that other people would do so, and in the end essentially nobody was really all that interested in contributing.  If you look today, except in a few small corners, the Q-wiki is essentially dead.


And, sad to say, this is quite a common story.  Many scientists, in fields ranging from genetics to string theory, have tried to start science wikis along very similar lines, and typically they have failed for essentially the same reason.


It's not just science wikis, either.  Inspired by Facebook, many organizations have tried to create social networks for scientists, which would connect scientists to other people with similar interests so they can share things like data, code, and ideas.  Again, it sounds like a good idea, but if you join one of these sites, you will quickly discover that they are essentially empty; they are virtual ghost towns.


So what is going on?  What is the problem here?  Why are these promising sites failing?  Well, imagine that you are an ambitious young scientist (in fact, I know that some of you are ambitious young scientists).  You really would like to get a job, a permanent job, a good job doing the work that you love.  But it's incredibly competitive to get such jobs.  Often there will be hundreds of very highly qualified applicants for a position.  And so you find yourself working sixty, seventy, eighty hours a week doing the one thing that you know will get you such a job, and that is writing scientific papers.  You may think that the Q-wiki is a wonderful idea in principle, but you also know that writing a single mediocre paper will do much more for your career and your job prospects than a long series of brilliant contributions to such a site.  So even though you may like the idea and think it will advance science more quickly, you find you just can't conceive of it as being part of your job.  It's not.  The only things that can succeed in this kind of environment are projects like the Polymath Project, which, even though they employ an unconventional means to an end, have an essential conservatism about them.  The end product of the Polymath Project was still a scientific paper; in fact, it was several papers.  Unconventional means, but a conventional end; there was a kind of conservatism about it.  Don't get me wrong: the Polymath Project is terrific.  But it is a pity that scientists can only use tools which have this kind of conservative nature.


So let me tell you a story about an instance where we moved away from this conservatism.  It is a rare story, but the conservatism has been broken.  It occurred in the 1990s when, as you know, for the first time, biologists were taking large amounts of genetic data, particularly in the Human Genome Project.  And there were sites online which would allow biologists to upload that data so it could be shared with other people around the world and analyzed by them.  Probably the best known of these is the site GenBank, which some of you may have heard of or used.  These sites, like GenBank, had a problem in common with the Q-wiki: scientists are not paid or rewarded for sharing their data (it's all about publishing papers), and so there was considerable reluctance to actually upload the data.  Everybody could see that this was silly; it was obvious that sharing was the right thing to do.  But just because it was obvious didn't mean that people were actually doing it.  And so a meeting was convened in Bermuda in 1996 of many of the world's leading molecular biologists.  They sat and discussed the problem for several days, and they came up with what are now called the Bermuda Principles, which state, first, that once human genetic data is taken in the lab, it should be immediately uploaded to a site like GenBank, and second, that the data will be in the public domain.


And these principles were given teeth because they were taken up by the big scientific grant agencies, the US National Institutes of Health and the UK Wellcome Trust, and baked into policy.  It meant that if you were a scientist who wanted to work on the human genome, you had to agree to abide by these principles.  And today, I am very pleased to say, as a result, anybody here can go online and download the human genome.  That's a terrific story, but the human genome is just a tiny, tiny fraction of all scientific knowledge.  Even in just other parts of genetics, there is so much knowledge that is still locked up.  I spoke with one bioinformatician who told me that he had been "sitting on the genome of an entire species for more than a year".  An entire species!  And in other parts of science, it is routine that scientists hoard their data; they hoard the computer code they write that could potentially be useful to other people; they hoard their best ideas; and they often hoard even the descriptions of the problems that they think are most interesting.


And so what I and other people in the open science movement would like to do is change this situation.  We would like to change the culture of science so that scientists become much more strongly motivated to share all of these different kinds of knowledge.  We want to change the values of individual scientists so that they start to see it as part of their job to be sharing their data, to be sharing their code, to be sharing their best ideas and their problems.


If we can bring about this kind of change in values, then we will indeed start to see individual scientists rewarded for doing these things; there will be incentives to do them.  It's a difficult thing to do, however.  We are talking about changing the culture of an entire large part of science.  But it has happened before, once in history.  Right back at the dawn of science, in 1609, Galileo points his telescope up at the sky towards Saturn, and he sees for the first time in history what we now know are the rings of Saturn.  Does he tell everybody in the world?  No, he doesn't do that.  He writes down a description privately, then scrambles the letters of the description into an anagram and sends that anagram to several of his astronomer rivals.  What this ensures is that if they later make the same discovery, he can reveal the anagram and get the credit, but in the meantime, he hasn't given up any knowledge at all.  And I'm sad to say that he was not uncommon at the time: Newton, Huygens, Hooke, Leonardo, they all used similar devices.


The printing press had been around for 150 years by this time, and yet there was a great battle in the 17th and 18th centuries to change the culture of science, so that it became expected that when a scientist made a discovery, they would reveal it in a journal.  And that's great; that change has happened, and it's terrific.  But today we have new technologies, new opportunities to share our knowledge in new ways, and the ability to create tools that actually allow us to solve problems in entirely new ways.


So we need to have a second "Open Science Revolution".  It is my belief that any publicly funded science should be open science.  How can we achieve this change?  Well, if you're a scientist (and I know many of you are not scientists), there are things that you can do.  You can get involved in an open science project, even if it's just for a small fraction of your time.  You can find forums online where you can share your knowledge in new ways, ways that allow other people to build on that knowledge.  You can also, if you're more ambitious, start an open science project of your own.  If you're really bold, you may wish to experiment with entirely new ways of collaborating, in much the same way as the Polymath Project did.  But above all, what you should do is be very generous in giving credit to those of your colleagues who are practicing science in the open, and to promote their work.


There are still conservative scientific values that look down on these activities: the sharing of data, blogging, the use of wikis and so on.  You can reject those conservative values and engage your scientific colleagues in conversation to promote the value of these new ways of working, and to emphasize that it takes bravery to do these things, particularly for young scientists.  It is through such conversation that the culture of science can be changed.


So if you are not a scientist, there are also things that you can do.  My belief is that the single most important thing we can do to give impetus to open science is to create a general awareness among the population of the issue of open science and of its critical importance.  If there is that general awareness, then the scientific community will inevitably find itself dragged by the population at large in the right direction.  There are simple things you can do.  You can talk to your friends and acquaintances who are scientists; just ask them what they are doing to work more openly.  Or you can use your imagination and your personal palette to raise awareness in other ways.  We are talking about changing not just what scientists do, but what grant agencies do, what universities do, and what governments do, and you can influence all of those things.


Our society faces a fundamental question: what kinds of knowledge are we going to expect and incentivize our scientists to share?  Will we continue as we have done in the past?  Or will we embrace new kinds of sharing, which lead to new methods for solving problems and an acceleration in the process of science, entirely across the board?


My hope is that we will embrace open science and really seize this opportunity that we have to reinvent discovery itself.




Thursday, August 2, 2012

4 Ways to Spot Phony Data In the Media

Whenever we turn on our televisions or read the paper, we see statistics and figures regarding everything from the probability of rain, to the rate of employment increase or decrease in the engineering sector, to the total number of babies born in the last forty-eight hours. Numbers are generally viewed as factual: if you add one and one, you always get two. However, like any other type of data, numbers can be manipulated, and how a data analysis is presented is as important as, if not more important than, the "facts" generated from that analysis.


When viewing statistics, look for the following red flags, which may indicate that you are viewing manipulated data.


1. Before believing a statistic, make sure that the company or organization did not "throw out" data that was negative or that did not support the point they wanted to make.


2. Find out exactly what was surveyed or what questions were actually asked of participants. Often, the data presented may have been gathered based on a completely different question or issue.


3. Make sure the statistics actually apply to the group that was being used for the analysis. Asking 15 dentists whether dancers should receive healthcare will generate a very different response from asking 15 dancers whether dancers should receive healthcare.


4. Find out as much as you can about the polled group before believing the data. A supposedly “random” study of 1000 people is not so random if all of the people studied are students at the same university, in the same major, and are all the same age.
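Point 4 is easy to demonstrate with a quick simulation. The population and numbers below are entirely hypothetical, but they show how a "random" sample drawn from one narrow group can badly skew an estimate:

```python
import random
from statistics import fmean

random.seed(42)

# A hypothetical population: ages of 100,000 adults, true mean around 45.
population = [random.gauss(45, 15) for _ in range(100_000)]

# A genuinely random sample of 1,000 people...
random_sample = random.sample(population, 1000)

# ...versus a "convenience" sample of 1,000 drawn only from the youngest
# quarter of the population (think: students at a single university).
youngest_quarter = sorted(population)[:25_000]
biased_sample = random.sample(youngest_quarter, 1000)

print(fmean(random_sample))  # lands near the true mean of 45
print(fmean(biased_sample))  # lands far below it
```

Both samples are "1000 randomly chosen people," but only one of them tells you anything about the population as a whole.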


At FastFig, we believe strongly in data transparency; in fact, we are building a numerical platform centered around it.  We believe that by arming the world with solid facts, we will all be able to make better decisions to solve the world's problems, big and small. 



Friday, July 27, 2012

I Guess Video Games Can Make Kids Smarter

In the last five years, the use of video-game-like math software in K-12 educational programs has grown exponentially. Every day, more studies are released regarding whether or not the use of software is actually making an appreciable difference in the mathematical skills of children and young adults. The findings are wide-ranging, and the results often seem to have more to do with how the software is incorporated into the current curriculum as opposed to the software itself. However, this statistical variability regarding efficacy has not stopped a number of companies from releasing their own educational software, and some companies have found alternative ways of packaging their products.


Programs like DimensionU have been especially successful. Part math software, part video game, the equation-based product requires students to solve algebra puzzles to advance. It can be played against other teams all over the world via the Internet and is currently used in a number of junior high and high school classes around the world. In Hawaii, it is currently being used at Waipahu High School, and since its implementation, 80% of the students have increased their math scores. As Waipahu High is the only school in Hawaii using the program, it provides a highly focused sample of whether the software is having the desired effect. In this case, the answer is clear.


Kids spend a tremendous amount of time playing video games; according to a Kaiser Family Foundation study, an average of 1 hour and 13 minutes per day.  Think of the impact if this time were redirected towards education.  But will a math video game ever be as fun as blowing away zombies or flying your very own X-wing?  That remains to be seen.



Monday, July 23, 2012

Open Publishing FAQ

by Pat Cotey


Publishing in many business sectors is struggling to find a new, workable business model, and scientific journals are no exception. Adding to the problem, scientific publishing is no longer best serving its readers. Scholarly journals were started as a means of disseminating scientific findings to interested scholars; at the time, they were the best and most economical means to share knowledge and new research.  Below are some answers to frequently asked questions about online, open access publishing.


How does the current system work?


When a scientist publishes in a journal, he forgoes his copyright so that the publisher will print his findings. He is not compensated for publishing, nor for peer reviewing a colleague's article (field specialists review drafts to verify original research, check research methods, and look for inconsistencies and problems with clarity). This sounds unfair, but in many ways, publishing a scientific article is akin to submitting a report to your boss in other industries: articles are part of the job, covered by a researcher's salary, and compensation for a peer review would be a significant conflict of interest. Currently, publishers profit from the journals, not scientists.


How do journal subscriptions hurt science?


Subscription fees are rising quickly, and universities, libraries, and research labs are tightening their belts to control costs. Subscription prices in the UK increased over 200% in the past ten years (1). As reported in The Guardian, Harvard University is billed $3.5 million per year by journal publishers. These fees limit access by scholars, as facilities have to go without needed journals to meet budgets. As scientists, we access journals through our institutions or libraries and don't always think about the associated costs until we are denied access. Journal publishing is a closed market, with no pressure for change from the scientists who contribute to and use these resources, as the users aren't typically paying directly for the service.


Is there an open access publishing model that is sustainable?


The Wellcome Trust, one of the largest providers of non-governmental scientific funding worldwide, commissioned two research studies in the scientific publishing sector. Firstly, they wanted to understand the economics of the publishing industry. Secondly, they wanted to explore alternative business models that “could enable research to have the quality assurance it needs (peer review), while using the Web as the publication medium and being available for free”(2). 


Looking at different business models for producing journals, the Trust first concluded that open access journals are better than traditional journal models for improving access to research. Traditional journal models profit by selling subscriptions to universities, schools and libraries. There are other “access tolls” such as site licenses and pay-per-view that add to the revenue stream. An open access journal model charges the author to publish and distributes the journal for free.


By charging researchers a fee to cover expenses for peer review, journal production, online hosting and archiving, an open access publishing model is sustainable. These fees can often be paid through research grants, waived in some cases and discounted through member affiliations. It is the variable costs of traditional publishing models (subscription management, license negotiations, sales, marketing and distribution) that are significantly reduced in the open access model.  


The Trust researched comparative costs of producing a good quality journal using the current subscription model and found an average cost of US $2750 to produce an article.  Using an author-side payment model, the associated cost to produce an article was US $1950. More study details are available in the article authored by Robert Terry (3).


Does Open Access signal the end of scientific publishing?


No, open access journals still need publishers.  Some journals are experimenting with a mixed access model where the journal is printed and published and, six months later, the article is made available electronically. The oversight of peer reviews, formatting, archiving and added search capabilities are still required in print and online publishing. Full web integration requires functionality in the searchable repository of articles and creates the need for increased curating of online journals. There is no doubt that some tweaks and modifications will be encountered as scientific publishing moves toward an open access online publishing model.


Is online publishing better than print publishing?


With internet access, there is no longer a primary need to print and mail journals to libraries and institutions around the world.  In fact, this process is more costly, time consuming and less efficient than other publishing models.  A journal that is fully integrated with the web offers additional benefits and unsurpassed accessibility to all users.  Online publishing allows linking to sources, interactive content such as graphs, videos and even interactive programs.


You guys love to talk about Open Science. How does FastFig fit in?


FastFig will be an openly available repository of scientific equations built so that those equations can be used easily and immediately.  While the repository will not be peer reviewed, FastFig will be an important tool for better integrating scientific publications into the web and a forum for discussion about scientific modeling.




 


1. LISU (2002) LISU annual library statistics 2002. Leicestershire: LISU.


2. Wellcome Trust (2004 April) Costs and business models in scientific research publishing: A report commissioned by the Wellcome Trust. Available: http://www.wellcome.ac.uk/assets/wtd003184.pdf. Accessed 19 January 2005.


3. Terry R (2005) Funding the Way to Open Access. PLoS Biol 3(3): e97. doi:10.1371/journal.pbio.0030097



Thursday, June 7, 2012

Tax-Payer Funded Research: Shouldn't We Have Access?

by Pat Cotey


Today, I signed the "We the People" petition to require free internet access to scientific journal articles that are published using taxpayer-funded research. I was signer number 25,731. Not as impressive as being one of the signers of the Constitution or the Declaration of Independence, but my opinion is being counted. Maybe collectively, our signatures will gather the support we need to make scientific research more accessible to everyday folks as well as researchers. An official response from the White House is a great start, and with over 25K signatures, that will happen.


The National Institutes of Health has already taken this step to open data access, and it has been wildly successful in sharing the publications and results of its researchers across the world.  Individuals struggling to make sense of a loved one's medical diagnosis can now search through NIH peer-reviewed journal articles to find pertinent information or better understand clinical trial options. Articles are deposited in PubMed Central and include all published manuscripts of NIH-supported research; manuscripts are added no later than 12 months after publication.



"We are now capable of taking individual discoveries and integrating them with all other research findings—both publications and data. Scientists can connect the dots between discoveries instantly, an advance analogous to moving from searching for fingerprint matches manually to matching prints in a database of millions in an instant."1 


Elias A. Zerhouni, MD, Director NIH, US Dept. of Health and Human Services



As the cost of scholarly journals escalates, libraries are facing the need to cut back on subscriptions, forcing researchers to pay high fees of $30–$50 per article to access information and data that may be of pertinent value to the next breakthrough. Access to all tax-payer funded research would open an enormous wealth of information to scientists, researchers, educators, patients and the interested public. Why are we making innovations more difficult by limiting access to data that could move us forward?


What are your thoughts on public access to taxpayer-funded research? I urge you to consider signing the "We the People" petition. The petition has already reached the 25,000-signature mark within the 30-day time period, so it will get an official Administration response. Let's double the effort and send an even stronger message. We have until June 30th, so click here to sign: http://bit.ly/MdS3Np




1. Excerpt from Public Access Policy of the National Institutes of Health Statement, Sept. 11, 2008, http://www.nih.gov/about/director/publicaccess_testimony.htm
 


 



Wednesday, May 30, 2012

To Go Where No Math Has Gone Before



by Brian Peacock


We have many goals for FastFig, but one of the things that gets me most excited is the prospect of putting powerful and often complex math models in the hands of the non-technical crowd.  A little bit of background: 


FastFig will include a sharing system that lets users share a model—an equation or snippet of FastFig code—with the world.  Each model will have its own page with information about it and a simple inputs-in-answer-back interface.  No matter how complex the model is, anyone will be able to use it by simply entering values for the different fields and pressing the equals button.  FastFig will spit back an answer.
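To make the idea concrete, here is a minimal sketch of what an inputs-in-answer-back model could look like behind the scenes. This is purely illustrative: `make_model`, its field names, and the projectile example are hypothetical, not actual FastFig code.

```python
import math

# Hypothetical sketch: a shared "model" pairs a formula with named
# input fields, and anyone can evaluate it by filling in those fields.

def make_model(formula, fields):
    """Wrap a formula so it can be evaluated from a dict of inputs."""
    def evaluate(inputs):
        missing = [f for f in fields if f not in inputs]
        if missing:
            raise ValueError("missing inputs: " + ", ".join(missing))
        # Pass only the declared fields, by name, to the formula.
        return formula(**{f: inputs[f] for f in fields})
    return evaluate

# Example shared model: range of a projectile on level ground, no drag.
projectile_range = make_model(
    lambda v, theta: v**2 * math.sin(2 * math.radians(theta)) / 9.81,
    fields=["v", "theta"],
)

# A user of the model's page would just fill in the fields:
print(projectile_range({"v": 20.0, "theta": 45.0}))  # ~40.8 (meters)
```

The point of the wrapper is that the person supplying `v` and `theta` never needs to see or understand the formula itself, which is the kind of separation the sharing system described above would provide.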


Why am I so excited about this?  Because it will give EVERYONE access to computations that were once available only to the technically trained (mathematicians, engineers, scientists, etc.).  What will people do with this new power?  I really can't wait to find out.  Can we use it to save energy?  Or perhaps better understand economic issues?  To visualize population growth?  Or to understand the impact of a construction project?  And researchers will be able to share their findings in a way that anyone can use NOW.  Using science in ways we never have before.  That's the prospect that keeps me working late into the night.



Monday, May 14, 2012

What We Scientists Need in a Computational Tool



by Brian Peacock


At FastFig, we are a team of scientists and engineers who, frustrated with existing computing tools, set out to create something better.  In this post, we would like to share what we, from experience, feel is needed in a 21st-century computational tool.


1.  A Great User Interface - No one has time to waste on a lousy user interface, yet so many programs today have one.  With tomorrow's computational software, it should be effortless to get started, and help should always be one click away.  That's just good design.


2.  Easy Sharing - What use is creating anything if it can't be shared?  Modern software should be built for sharing, whatever the application.  In science, we need to be able to communicate our results to experts and non-experts alike, and we believe software can help.


3.  Cloud Computing - Today we have vast computing resources at our disposal.  Yet, due to the technical complexity of accessing these resources, the majority of us are not able to use them.  Software should automate our access to the cloud.


4.  Social Interaction - Everything seems to be social today, so why not science?  And we don't simply mean adding 'like' buttons to every page.  We mean true social interaction between scientists at all times.  From validating results to project collaboration, we can make science better through social communication.


5.  Full Web Integration - The most incredible thing about the web is the ability for programs to easily work together by sharing data and computing resources.  The next scientific computing tool should play well with others.


What's important to you?  What do you want?  What do you need?  Let us know and we will build it.



Wednesday, May 9, 2012

Why Science Should be About Users not Readers

by Brian Peacock


The way we create and share scientific knowledge has changed very little since the journals Philosophical Transactions of the Royal Society and Journal des Sçavans were first published on a regular basis in 1665.  From then on, nothing was really science unless it was published in a peer-reviewed journal.  Today, scientific journal publishing has grown into a vast and (some feel unfairly) profitable industry.  With the rise of the internet, little has changed about the way science is shared.  Journals may be online and better indexes have been developed, but at the heart of it all is static content that must be digested by a knowledgeable reader.  Note that I use the word reader.  We do scientific research with the hope of eventually using that knowledge.  The current system is geared toward the science reader, not the science user.  Below are 4 reasons we need a more user-centric way to share science:


1.  Journals are data poor


Scientific journals have limited space for figures and data tables and typically include only the most significant data.  This is fine for the reader, but what about the scientist or engineer who might want to use this data to support another argument?  Some journals allow online attachments for this kind of content, but many do not.  The interested user must therefore contact the author to obtain the information, often a time-consuming if not fruitless task.


2.  Data is not easily aggregated


Test and then test again is a mantra of the current scientific process.  A complete comparison between similar studies occurs only once someone has decided it is worth their while to write a review paper summarizing current thinking on a topic.  While there will always be a place for these articles, they should describe processes rather than aggregate data.  In user-centric sharing, new data in a field should be tacked onto old data and analyzed as a whole.


3.  Equations and algorithms are not immediately useful


My field is environmental modeling.  As modelers, we use equations to describe the behavior of a physical system.  Under current publishing practices, a model is published as a set of equations that only a skilled technical user can implement.  If the user is lucky, the paper's author will have shared the code or a stand-alone program that solves the model.  In many cases, however, this code is difficult to incorporate into another model, especially for the nontechnical user.


4.  Data from 'failed' or 'amateur' experiments is neglected


Data is useful in science.  Period.  Provided the data is accurate, its source is irrelevant.  We have two major neglected sources of valuable data: 'failed' experiments and 'amateur' experiments.  Failed experiments are not really failures at all, but rather discoveries of what is not true.  The data collected in these experiments often goes unpublished despite its potential use in related studies.  Similarly, amateur experiments are rarely published due to a perceived lack of credibility.  Yet there is inherent value in the results amateurs produce; the data simply has larger error bars.  For example, environmental science classes all over the world collect water quality data on a regular basis as an educational exercise.  This data is rarely reported despite the value it could have for environmental scientists and policy makers.


The internet has great potential to solve many of the problems associated with static journal content.  After all, the internet was first devised as a way to share academic materials.  Let's use technology to build a more user-centric scientific system.