SIGIR ‘09 Accepted Papers

Thanks to Jeff Dalton for alerting me that SIGIR 2009 has announced its lists of accepted papers and posters. As Jon Elsas points out, the authorship looks quite different this year than in previous years, with industry showing an especially strong presence:

  • 38% of the papers have at least one author from Microsoft (21 papers), Yahoo! (7 papers), or Google (3 papers)
  • No papers from current UMass researchers (though a number from alumni, and decent representation in the posters), and the only CMU papers accepted were based on work done during internships.

I’m not sure how to interpret this sudden change. Tighter university budgets? More openness on the part of industry? Regardless, I am excited about the papers. Here are a few (well, ten) paper titles that caught my eye:

  • A Comparison of Query and Term Suggestion Features for Interactive Searching
  • A Statistical Comparison of Tag and Query Logs
  • Building Enriched Document Representations using Aggregated Anchor Text
  • Dynamicity vs. Effectiveness: Studying Online Clustering for Scatter/Gather
  • Effective Query Expansion for Federated Search
  • Enhancing Cluster Labeling Using Wikipedia
  • Formulating Effective Queries: An Empirical Study on Effectiveness and Effort
  • Telling Experts from Spammers: Expertise Ranking in Folksonomies
  • When More Is Less: The Paradox of Choice in Search Engine Use
  • Where to Stop Reading a Ranked List? Threshold Optimization using Truncated Score Distributions

The posters look great too! I’m especially curious about these ten:

  • A Case for Improved Evaluation of Query Difficulty Prediction
  • A Relevance Model Based Filter for removing Bad Ads
  • An Evaluation of Entity and Frequency Based Query Completion Methods
  • Analysing query diversity
  • Cluster-based query expansion
  • Evaluating Web Search Using Task Completion Time
  • Has Adhoc Retrieval Improved Since 1994?
  • Is This Urgent? Exploring Time-Sensitive Information Needs in Community Question Answering
  • Relevance Criteria for E-Commerce: A Crowdsourcing-based Experimental Analysis
  • When is Query Performance Prediction Effective?

And, of course, I’m gearing up for the Industry Track. More details will be posted soon, and you’ll be the first to know.
