- CLEVER: A Curated Benchmark for Formally Verified Code Generation
We introduce CLEVER, the first curated benchmark for evaluating the generation of specifications and formally verified code in Lean. The benchmark comprises 161 programming problems; it evaluates both formal specification generation and implementation synthesis from natural language, requiring formal correctness proofs for both.
- CLEVER: A Curated Benchmark for Formally Verified Code Generation
This paper introduces CLEVER, a benchmark dataset designed to evaluate LLMs on formally verified code generation. It consists of 161 carefully crafted Lean specifications derived from programming problems in the existing HumanEval dataset.
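To make the task format concrete, here is a minimal Lean 4 sketch of the kind of artifact such a benchmark asks a model to produce: a formal specification, an implementation, and a proof linking the two. The toy problem and the names `maxSpec` and `myMax` are our own illustration, not an item from CLEVER.

```lean
-- Hypothetical task (illustrative, not taken from the benchmark):
-- "return the maximum of two integers".

-- Generated formal specification: the result is one of the inputs
-- and is an upper bound for both.
def maxSpec (a b r : Int) : Prop :=
  (r = a ∨ r = b) ∧ a ≤ r ∧ b ≤ r

-- Generated implementation.
def myMax (a b : Int) : Int :=
  if a ≤ b then b else a

-- Generated proof that the implementation meets the specification.
theorem myMax_correct (a b : Int) : maxSpec a b (myMax a b) := by
  unfold maxSpec myMax
  split <;> omega
```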
- Forum - OpenReview
Promoting openness in scientific communication and the peer-review process
- The Clever Hans Mirage: A Comprehensive Survey on Spurious...
Back in the early 20th century, a horse named Hans appeared to perform arithmetic and other intellectual tasks during exhibitions in Germany, when in fact it relied solely on involuntary cues in the body language of its handlers.
- Evaluating the Robustness of Neural Networks: An Extreme Value...
Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is attack-agnostic and is computationally feasible for large neural networks.
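The recipe behind the score can be summarized in a short sketch: sample gradient norms of the classification margin in a ball around the input, fit a reverse Weibull distribution to the batch maxima, and divide the margin by the estimated local cross-Lipschitz constant. The Python below is a rough illustration under those assumptions; the function name, defaults, and single-target formulation are ours, not the paper's reference implementation.

```python
import numpy as np
from scipy.stats import weibull_max

def clever_score(grad_g, g_x0, x0, radius=0.5, n_batches=50, batch_size=100):
    """Sketch of a targeted CLEVER-style robustness score.

    grad_g(x) returns the gradient of g(x) = f_c(x) - f_j(x); g_x0 is the
    classification margin g(x0); x0 is a flat numpy vector. Names and
    default values here are illustrative assumptions.
    """
    d = x0.size
    maxima = []
    for _ in range(n_batches):
        # Sample batch_size points uniformly from the L2 ball B(x0, radius):
        # uniform direction (normalized Gaussian) times radius * U^(1/d).
        dirs = np.random.randn(batch_size, d)
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        r = radius * np.random.rand(batch_size, 1) ** (1.0 / d)
        pts = x0 + r * dirs
        # Keep the largest gradient norm observed in this batch.
        maxima.append(max(np.linalg.norm(grad_g(p)) for p in pts))
    # Extreme value theory: the batch maxima follow a reverse Weibull law
    # whose location parameter estimates the local cross-Lipschitz constant.
    _, loc, _ = weibull_max.fit(maxima)
    return g_x0 / loc  # attack-agnostic lower-bound estimate on distortion
```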
- Contrastive Learning Via Equivariant Representation - OpenReview
In this paper, we revisit the roles of augmentation strategies and equivariance in improving CL's efficacy. We propose CLeVER (Contrastive Learning Via Equivariant Representation), a novel equivariant contrastive learning framework compatible with augmentation strategies of arbitrary complexity for various mainstream CL backbone models.
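As a rough illustration of what an equivariant contrastive objective can look like, the sketch below combines a standard InfoNCE invariance term with a head that regresses the augmentation parameters from the pair of features. This is one generic way to impose equivariance, not necessarily CLeVER's exact objective; the module names and dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EquivariantCL(nn.Module):
    def __init__(self, encoder, feat_dim=512, proj_dim=128, aug_dim=1):
        super().__init__()
        self.encoder = encoder                            # any backbone, e.g. a ResNet
        self.proj = nn.Linear(feat_dim, proj_dim)         # invariant projection head
        self.aug_pred = nn.Linear(2 * feat_dim, aug_dim)  # equivariance head

    def forward(self, x1, x2, aug_params, temperature=0.2):
        h1, h2 = self.encoder(x1), self.encoder(x2)
        z1 = F.normalize(self.proj(h1), dim=1)
        z2 = F.normalize(self.proj(h2), dim=1)
        # Standard InfoNCE / NT-Xent term: matching views attract.
        logits = z1 @ z2.T / temperature
        labels = torch.arange(z1.size(0), device=z1.device)
        loss_inv = F.cross_entropy(logits, labels)
        # Equivariance term: recover the augmentation parameters that
        # relate the two views from their (unprojected) features.
        pred = self.aug_pred(torch.cat([h1, h2], dim=1))
        loss_equiv = F.mse_loss(pred, aug_params)
        return loss_inv + loss_equiv
```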
- Counterfactual Debiasing for Fact Verification
In this paper, we have proposed a novel counterfactual framework CLEVER for debiasing fact-checking models. Unlike existing works, CLEVER is augmentation-free and mitigates biases at the inference stage. In CLEVER, the claim-evidence fusion model and the claim-only model are independently trained to capture the corresponding information.
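A minimal sketch of this kind of inference-stage counterfactual debiasing, assuming the two independently trained models are available: subtract the claim-only (bias) logits from the fused logits before normalizing. The subtraction rule and the `alpha` weight are our illustration; the paper's exact combination may differ.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def debiased_predict(fusion_model, claim_only_model, claim, evidence, alpha=1.0):
    logits_fused = fusion_model(claim, evidence)  # uses claim + evidence
    logits_claim = claim_only_model(claim)        # captures claim-only bias
    # Counterfactual inference: remove the effect attributable to the
    # claim alone, keeping what the evidence actually contributes.
    debiased = logits_fused - alpha * logits_claim
    return F.softmax(debiased, dim=-1)
```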
- STAIR: Improving Safety Alignment with Introspective Reasoning
One common approach is training models to refuse unsafe queries, but this strategy can be vulnerable to clever prompts, often referred to as jailbreak attacks, which can trick the AI into providing harmful responses. Our method, STAIR (SafeTy Alignment with Introspective Reasoning), guides models to think more carefully before responding.