Stack Overflow has been heavily used by software developers to seek programming-related information. More and more developers use Community Question and Answer forums, such as Stack Overflow, to search for code examples of how to accomplish a certain coding task. This is often considered to be more efficient than working from source documentation, tutorials, or full worked examples. However, due to the complexity of these online Question and Answer forums and the very large volume of information they contain, developers can be overwhelmed and find it hard to locate, or even be aware of, the most relevant code examples for their needs. To alleviate this issue, in this work we present a query-driven code recommendation tool, named Que2Code, that identifies the best code snippets for a user query from Stack Overflow posts. Our approach has two main stages: (i) semantically-equivalent question retrieval and (ii) best code snippet recommendation. To evaluate the performance of our proposed model, we conduct a large-scale experiment that assesses the semantically-equivalent question retrieval task and the best code snippet recommendation task separately on Python and Java datasets from Stack Overflow. We also perform a human study to measure how real-world developers perceive the results generated by our model. Both the automatic and human evaluation results demonstrate the promising performance of our model, and we have released our code and data to assist other researchers.

A common practice among programmers is to reuse existing code, accomplished by performing natural language queries through search engines. The main aim of code retrieval is to search for the most relevant snippet from a corpus of code snippets, but unfortunately, code retrieval frameworks for low-resource languages are insufficient.
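The two-stage pipeline described above (retrieve semantically similar questions, then rank their code snippets) can be sketched as follows. This is a minimal illustration only: the helper names (`retrieve_questions`, `recommend_snippet`) are hypothetical, and plain bag-of-words cosine similarity stands in for whatever models Que2Code actually uses at each stage.

```python
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words representation of a text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve_questions(query, questions, k=2):
    """Stage (i): rank corpus questions by similarity to the user query."""
    ranked = sorted(questions, key=lambda q: cosine(bow(query), bow(q)), reverse=True)
    return ranked[:k]

def recommend_snippet(query, question_to_snippets, questions):
    """Stage (ii): pool snippets from the retrieved questions, return the top-scoring one."""
    candidates = []
    for q in retrieve_questions(query, questions):
        candidates.extend(question_to_snippets[q])
    return max(candidates, key=lambda s: cosine(bow(query), bow(s)), default=None)

# Toy corpus for illustration.
questions = [
    "how to reverse a list in python",
    "how to sort a dictionary by value",
]
snippets = {
    "how to reverse a list in python": ["xs[::-1]", "list(reversed(xs))"],
    "how to sort a dictionary by value": ["sorted(d.items(), key=lambda kv: kv[1])"],
}
print(recommend_snippet("reverse a python list", snippets, questions))
```

Separating retrieval from recommendation keeps the expensive ranking step confined to a small candidate pool, which is the usual motivation for retrieve-then-rank designs.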
Tens of thousands of engineers use Sourcegraph day-to-day to search for code and rely on it to make progress on software development tasks. We face a key challenge in designing a query language that accommodates the needs of a broad spectrum of users. Our experience shows that users express different and often contradictory preferences for how queries should be interpreted. These preferences stem from users with differing usage contexts, technical experience, and implicit expectations from using prior tools. At the same time, designing a code search query language poses unique challenges because it intersects traditional search engines and full-fledged programming languages. For example, code search queries adopt certain syntactic conventions in the interest of simplicity and terseness, but invariably risk encoding implicit semantics that are ambiguous at face value (a single space in a query could mean three or more semantically different things depending on surrounding terms). Users often need to disambiguate intent with additional syntax so that a query expresses what they actually want to search. This need to disambiguate is one of the primary frustrations we've seen users experience with writing search queries in the last three years. We share our observations that lead us to a fresh perspective where code search behavior can straddle seemingly ambiguous queries. We develop Automated Query Evaluation (AQE), a new technique that automatically generates and adaptively runs alternative query interpretations in frustration-prone conditions. We evaluate AQE with an A/B test across more than 10,000 unique users on our publicly-available code search instance. Our main result shows that, relative to the control group, users are on average 22% more likely to click on a search result at all on any given day when AQE is active.
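The space-ambiguity example above can be made concrete with a small sketch. The rewrite rules below are hypothetical, chosen only to illustrate the kind of alternative interpretations a technique like AQE might generate and run; they are not Sourcegraph's actual query grammar.

```python
def alternative_interpretations(query):
    """Generate alternative readings of an ambiguous space-separated query.

    Hypothetical rules for illustration: a space between terms could mean
    a conjunction of terms, an exact phrase, or ordered terms in a regex.
    """
    terms = query.split()
    if len(terms) < 2:
        return [query]  # nothing ambiguous to rewrite
    return [
        " AND ".join(terms),  # conjunctive: all terms, in any order
        f'"{query}"',         # literal phrase match
        ".*".join(terms),     # ordered terms as a regex pattern
    ]

print(alternative_interpretations("open file"))
# → ['open AND file', '"open file"', 'open.*file']
```

Running such alternatives in parallel and surfacing whichever returns useful results is one plausible reading of "adaptively runs alternative query interpretations in frustration-prone conditions."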