Showing 1 - 10 of 2,349 results for '"Pujar, A"', query time: 1.64s
  1.
    Conference

    Source: 2024 International Conference on Integrated Circuits and Communication Systems (ICICACS), pp. 1-6, Feb 2024

    Relation: 2024 International Conference on Integrated Circuits and Communication Systems (ICICACS)

  2.
    Report

    Description: The availability of Large Language Models (LLMs) which can generate code has made it possible to create tools that improve developer productivity. Integrated development environments (IDEs), which developers use to write software, are often used as an interface to interact with LLMs. Although many such tools have been released, almost all of them focus on general-purpose programming languages. Domain-specific languages, such as those crucial for IT automation, have not received much attention. Ansible is one such YAML-based IT automation-specific language. Red Hat Ansible Lightspeed with IBM Watson Code Assistant, further referred to as Ansible Lightspeed, is an LLM-based service designed explicitly for natural language to Ansible code generation. In this paper, we describe the design and implementation of the Ansible Lightspeed service and analyze feedback from thousands of real users. We examine diverse performance indicators, classified according to both immediate and extended utilization patterns, along with user sentiments. The analysis shows that the user acceptance rate of Ansible Lightspeed suggestions is higher than that of comparable tools that are more general and not specific to a programming language. This remains true even after we use much more stringent criteria for what is considered an accepted model suggestion, discarding suggestions which were heavily edited after being accepted. The relatively high acceptance rate results in higher-than-expected user retention and generally positive user feedback. This paper provides insights on how a comparatively small, dedicated model performs on a domain-specific language and, more importantly, how it is received by users.
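The stricter acceptance criterion described in the abstract, discarding suggestions that were heavily edited after being accepted, can be sketched as an edit-similarity filter. This is purely illustrative: the function name, the threshold, and the `difflib`-based similarity are assumptions, not the paper's actual metric.

```python
import difflib

def is_accepted(suggestion: str, final_code: str, threshold: float = 0.5) -> bool:
    """Count a suggestion as accepted only if the code the user ultimately
    kept is still similar to it, i.e. it was not heavily edited after
    being accepted. Threshold and similarity metric are illustrative."""
    similarity = difflib.SequenceMatcher(None, suggestion, final_code).ratio()
    return similarity >= threshold

# A lightly edited suggestion still counts as accepted,
# while a heavy rewrite is discarded from the acceptance count.
print(is_accepted("- name: install nginx", "- name: Install nginx"))  # True
print(is_accepted("- name: install nginx", "- copy: src=a dest=b"))   # False
```

Any real deployment would compare token-level or AST-level edits rather than raw characters, but the filtering idea is the same.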

  3.
    Conference

    Source: 2023 60th ACM/IEEE Design Automation Conference (DAC), pp. 1-4, Jul 2023

    Relation: 2023 60th ACM/IEEE Design Automation Conference (DAC)

  4.
    Conference

    Source: 2023 International Conference on Sustainable Computing and Smart Systems (ICSCSS), pp. 1684-1689, Jun 2023

    Relation: 2023 International Conference on Sustainable Computing and Smart Systems (ICSCSS)

  5.
    Conference

    Source: 2023 7th International Conference on Intelligent Computing and Control Systems (ICICCS), pp. 1855-1859, May 2023

    Relation: 2023 7th International Conference on Intelligent Computing and Control Systems (ICICCS)

  6.
    Report

    Subject Terms: Computer Science - Cryptography and Security

    Description: Large Language Models (LLMs) have been suggested for use in automated vulnerability repair, but benchmarks showing they can consistently identify security-related bugs are lacking. We thus develop SecLLMHolmes, a fully automated evaluation framework that performs the most detailed investigation to date on whether LLMs can reliably identify and reason about security-related bugs. We construct a set of 228 code scenarios and analyze eight of the most capable LLMs across eight different investigative dimensions using our framework. Our evaluation shows LLMs provide non-deterministic responses, incorrect and unfaithful reasoning, and perform poorly in real-world scenarios. Most importantly, our findings reveal significant non-robustness in even the most advanced models like 'PaLM2' and 'GPT-4': by merely changing function or variable names, or by the addition of library functions in the source code, these models can yield incorrect answers in 26% and 17% of cases, respectively. These findings demonstrate that further LLM advances are needed before LLMs can be used as general-purpose security assistants.
    Comment: Accepted for publication in IEEE Symposium on Security and Privacy 2024
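The identifier-renaming perturbation this abstract describes is semantics-preserving: renaming a variable cannot change whether the code is vulnerable, so a robust analyzer must give the same verdict before and after. A minimal sketch of such a perturbation (the helper name and regex approach are illustrative, not SecLLMHolmes's implementation):

```python
import re

def rename_identifier(code: str, old: str, new: str) -> str:
    """Replace whole-word occurrences of one identifier with another,
    a behavior-preserving transformation of the source code."""
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

original = "int buf_len = strlen(buf);\nmemcpy(dst, buf, buf_len + 1);"
perturbed = rename_identifier(original, "buf_len", "n")
# A robust analyzer should reach the same conclusion on both versions;
# the study reports that even top models often change their answers.
```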

  7.
    Academic Journal

  8.
    Academic Journal

    Authors: Hoang, Summer, Pujar, Thejeswi, Bellorin-Font, Ezequiel, Edwards, John C., Miyata, Kana N.

    Source: CEN Case Reports: Official Publication of the Japanese Society of Nephrology. 13(3):194-198

  9.
    Conference

    Source: 2023 Third International Conference on Artificial Intelligence and Smart Energy (ICAIS), pp. 466-470, Feb 2023

    Relation: 2023 Third International Conference on Artificial Intelligence and Smart Energy (ICAIS)

  10.
    Report

    Subject Terms: Computer Science - Computation and Language, I.2.7, I.2.5

    Description: Large language models (LLMs) have become remarkably good at improving developer productivity for high-resource programming languages. These models use two kinds of data: large amounts of unlabeled code samples for pre-training and relatively smaller amounts of labeled code samples for fine-tuning or in-context learning. Unfortunately, many programming languages are low-resource, lacking labeled samples for most tasks and often even lacking unlabeled samples. Therefore, users of low-resource languages (e.g., legacy or new languages) miss out on the benefits of LLMs. Cross-lingual transfer uses data from a source language to improve model performance on a target language. It has been well-studied for natural languages, but has received little attention for programming languages. This paper reports extensive experiments on four tasks using a transformer-based LLM and 11 to 41 programming languages to explore the following questions. First, how well does cross-lingual transfer work for a given task across different language pairs? Second, given a task and a target language, how should one choose a source language? Third, which characteristics of a language pair are predictive of transfer performance, and how does that depend on the given task? Our empirical study with 1,808 experiments reveals practical and scientific insights, such as Kotlin and JavaScript being the most transferable source languages and different tasks relying on substantially different features. Overall, we find that learning transfers well across several programming languages.
    Comment: 15 pages, 9 figures, 8 tables
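One simple, hypothetical stand-in for the language-pair characteristics such a study correlates with transfer performance is vocabulary overlap between source- and target-language corpora. The function below is purely illustrative and does not reproduce the paper's actual feature set:

```python
def token_overlap(source_corpus: str, target_corpus: str) -> float:
    """Jaccard overlap between the token vocabularies of two corpora,
    a crude proxy for how lexically similar two programming languages are."""
    src, tgt = set(source_corpus.split()), set(target_corpus.split())
    return len(src & tgt) / len(src | tgt)

# Pre-tokenized toy snippets; languages that share keywords and operators
# (e.g., Kotlin and JavaScript) score higher on features like this one.
kotlin_like = "fun add ( a : Int , b : Int ) = a + b"
js_like = "function add ( a , b ) { return a + b ; }"
print(token_overlap(kotlin_like, js_like))
```

Ranking candidate source languages by such a feature is one plausible way to answer the paper's second question, though the study finds that useful features differ substantially by task.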