The Latest from Blogs and Journals

Browse recent content from academic blogs and journals

2017-03-08
Abstract

Prior scholarship overlooks the capacity of other actors to raise the political costs of unilateral action by turning public opinion against the president. Through a series of five experiments embedded in nationally representative surveys, we demonstrate Congress's ability to erode support for unilateral actions by raising both constitutional and policy-based objections to the exercise of unilateral power. Congressional challenges to the unilateral president diminish support for executive action across a range of policy areas in both the foreign and domestic realm and are particularly influential when they explicitly argue that presidents are treading on congressional prerogatives. We also find evidence that constitutional challenges are more effective when levied by members of Congress than by other actors. The results resolve a debate in the literature and suggest a mechanism through which Congress might exercise a constraint on the president, even when it is unable to check him legislatively.

2017-03-08
Methodology Blogs

Paul Alper points in a comment to an excellent news article by James Glanz and Agustin Armendariz:

Dr. Carlo Croce is among the most prolific scientists in an emerging area of cancer research . . . a member of the National Academy of Sciences, Dr. Croce has parlayed his decades-long pursuit of cancer remedies into a research empire: He has received more than $86 million in federal grants . . .

Over the last several years, Dr. Croce has been fending off a tide of allegations of data falsification and other scientific misconduct, according to federal and state records, whistle-blower complaints and correspondence with scientific journals obtained by The New York Times.

In 2013, an anonymous critic contacted Ohio State and the federal authorities with allegations of falsified data in more than 30 of Dr. Croce’s papers. Since 2014, another critic, David A. Sanders, a virologist who teaches at Purdue University, has made claims of falsified data and plagiarism directly to scientific journals where more than 20 of Dr. Croce’s papers have been published. . . .

From just a handful of notices before 2013 — known as corrections, retractions and editors’ notices — the number has ballooned to at least 20, with at least three more on the way, according to journal editors. Many of the notices involve the improper manipulation of a humble but universal lab technique called western blotting, which measures gene function in a cell and often indicates whether an experiment has succeeded or failed.

Hey—this sounds pretty bad!

Despite the lashing criticisms of his work, Dr. Croce has never been penalized for misconduct, either by federal oversight agencies or by Ohio State, which has cleared him in at least five cases involving his work or the grant money he receives. . . . Now, in the wake of those and other questions from The Times, the university has decided to take a new look to determine whether it handled those cases properly. . . . Whatever the outcome of that review, Mr. Davey said, decisions on research misconduct at Ohio State were based solely on “the facts and the merits of each individual case,” not a researcher’s grant money. Any other suggestion would be “false and offensive,” he said, adding that the university has “spent significantly more to support his research program than he has brought in from outside sources.”

Sunk cost fallacy, anyone?

But let’s hear Croce’s side of the story:

During an interview in October, and in a later statement, Dr. Croce, 72, denied any wrongdoing . . . “It is true that errors sometimes occur in the preparation of figures for...

Revolutions
2017-03-08
Methodology Blogs

by Le Zhang (Data Scientist, Microsoft) and Graham Williams (Director of Data Science, Microsoft)

Employee retention has been, and will continue to be, one of the biggest challenges companies face. While classical tactics such as promotions and competitive perks are still used to retain employees, it is now increasingly common to rely on machine learning to discover behavioral patterns that help a company understand its employees better.

Employee demographic data has long been studied and used to analyze employees' inclination to leave a company. Nowadays, with the proliferation of the Internet, employee behavior can be better understood and analyzed through data such as internal and external social media posts. Such data can be analyzed for, among other things, sentiment, and thereby used to estimate an employee's likelihood of leaving the company. Cognitive computing technology based on artificial intelligence empowers today's HR departments to identify staff who are likely to churn before they do; through proactive intervention, HR can then encourage them to remain with the company longer.

This blog post introduces an R-based data science accelerator that a data scientist can quickly adopt to prototype a solution for the employee attrition prediction scenario. The prediction is based on two types of employee data that companies typically already collect:

  1. Static data, which does not tend to change over time. This refers to demographic and organizational attributes such as age, gender, and title. A characteristic of this type of data is that, within a given period, it either does not change or changes only in a deterministic way. For instance, an employee's years of service is static because the number simply increments every year.
  2. Dynamically evolving information about an employee. Recent studies have revealed that sentiment plays a critical role in employee attrition prediction. Classical measures of sentiment require employee surveys of work satisfaction. Social media posts are useful for sentiment analysis because employees may express their feelings about work there. Unstructured data such as text can be mined for patterns that are indicative of employees with different inclinations to churn.
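To make the second data type concrete: the simplest way to turn free-text posts into a numeric sentiment feature is a lexicon-based score. The accelerator described in the post is R-based and uses real NLP tooling; the tiny word lists and function below are purely hypothetical, a minimal Python sketch of the idea:

```python
# Minimal lexicon-based sentiment scoring of employee posts (illustrative only;
# the word lists are hypothetical, not from the accelerator described above).
POSITIVE = {"great", "happy", "love", "rewarding", "supportive"}
NEGATIVE = {"tired", "stuck", "unhappy", "overworked", "leaving"}

def sentiment_score(post: str) -> float:
    """Return a score in [-1, 1]: (positive - negative) / matched words."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

posts = ["I love the supportive team here", "Feeling overworked and stuck lately"]
scores = [sentiment_score(p) for p in posts]  # one feature per post
```

Averaging such scores over an employee's recent posts yields a single dynamic feature that can sit alongside the static attributes from point 1.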

Attrition prediction is a scenario that takes historic employee data as input and identifies individuals who are inclined to leave. The basic procedure is to extract features from...
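The procedure described above (featurize historic records, fit a classifier, score current employees) can be sketched end to end. The post's accelerator is in R; the stdlib-only Python below is an assumption-laden toy, with hypothetical field names and a hand-rolled logistic regression standing in for a real model:

```python
# Sketch of the attrition pipeline: featurize each employee record (field names
# below are hypothetical), then fit a tiny logistic regression on historic
# labels via gradient descent. Stdlib only; not the accelerator's actual code.
import math

def featurize(emp):
    """Static attributes plus an aggregate sentiment score -> feature vector."""
    return [emp["years_service"] / 10.0,            # scaled tenure (static)
            1.0 if emp["recent_promotion"] else 0.0,
            emp["mean_sentiment"]]                  # dynamic, e.g. from posts

def train(X, y, epochs=2000, lr=0.5):
    """Fit logistic-regression weights (last entry is the bias)."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi + [1.0]))
            p = 1.0 / (1.0 + math.exp(-z))
            for j, xj in enumerate(xi + [1.0]):
                w[j] -= lr * (p - yi) * xj          # gradient step
    return w

def predict(w, emp):
    """Probability that the employee will leave."""
    z = sum(wj * xj for wj, xj in zip(w, featurize(emp) + [1.0]))
    return 1.0 / (1.0 + math.exp(-z))

# Tiny historic sample: label 1 = employee left.
history = [
    ({"years_service": 1, "recent_promotion": False, "mean_sentiment": -0.8}, 1),
    ({"years_service": 8, "recent_promotion": True,  "mean_sentiment":  0.6}, 0),
    ({"years_service": 2, "recent_promotion": False, "mean_sentiment": -0.5}, 1),
    ({"years_service": 6, "recent_promotion": True,  "mean_sentiment":  0.7}, 0),
]
w = train([featurize(e) for e, _ in history], [lab for _, lab in history])
risk = predict(w, {"years_service": 1, "recent_promotion": False,
                   "mean_sentiment": -0.7})
```

In practice the same shape holds with a real library model; the point is only that static and dynamic features feed one classifier trained on historic churn labels.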

2017-03-08
Methodology Blogs

In our recent discussion of Ted doubling down on power pose, commenter Michael raised an interesting question:

I think the general attitude of most people who work on communicating science to the public is that their responsibility is only to make sure that any information they present has a source with the proper credentials (published in a peer-reviewed journal, endorsed by PhD experts in the relevant disciplines at universities). Since they are not themselves PhD experts, the feeling is that “Who am I to challenge this expert? I am just telling you what my expert says, it’s not my job to get involved in these obscure internal arguments”. . . . If Slate can let Andrew Gelman write an article, or Retraction Watch can publish an interview with him expressing his position without publishing comments from experts with objectively equal qualifications who disagree, why can’t TED let Amy Cuddy put out her ideas? How should someone outside of the relevant disciplines be expected to know when what an expert is saying needs to be challenged? I can’t think of a good solution.

I replied as follows:

One difference between Cuddy’s Ted talk and my Slate articles is that I take the other side of the argument seriously, even if I express disagreement.

For example, today in Slate I looked into Jon Krosnick’s claim that the outcome of the 2016 election was determined by Trump being listed first on the ballot in many swing states. I concluded that it was possible but that I was skeptical that the effects would’ve been large. True, Slate did not invite Krosnick to respond. But in my article I linked to Krosnick’s statement, I clearly stated my sources of evidence, I linked and took seriously a research article by Krosnick and others on the topic . . . I did my due diligence.

In contrast, the Ted team avoids linking to criticisms of Cuddy’s work, and I do not consider her statements to be in the full spirit of scientific inquiry. It seems like a damage control operation more than anything else. As to the original Carney, Cuddy, and Yap article: as I noted above, it makes a claim in the abstract that is not supported by anything in the paper. And more recently Carney gave a long list of problems with the paper, which again Cuddy is not seriously addressing.

This response is fine as far as it goes, but I realized something else is going on, which is that Slate and Ted and other media outlets get multiple...