BlogNote (Total 2 articles)
BookNote (Total 1 article)
DeepLearning (Total 2 articles)
MIT-molecular biology (Total 1 article)
MeetingNote (Total 1 article)
PaperNote (Total 214 articles)
Stanford-2022 AI Index Report
Nature-2021 Advancing mathematics by guiding human intuition with AI
ICML-2017 Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
OpenAI-2022 Robust Speech Recognition via Large-Scale Weak Supervision
A joint embedding of protein sequence and structure enables robust variant effect predictions
Bioinformatics-2023 Accurate and efficient protein sequence design through learning concise local environment of residues
bioRxiv-2023 Ankh: Optimized Protein Language Model Unlocks General-Purpose Modelling
PNAS-2021 Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences
ICLRw-2022 Convolutions are competitive with transformers for protein sequence pretraining
Nucleic Acids Research-2022 DeepLoc 2.0: multi-label subcellular localization prediction using protein language models
Bioinformatics-2021 DNABERT: pre-trained Bidirectional Encoder Representations from Transformers model for DNA-language in genome
Bioinformatics-2017 DeepLoc: prediction of protein subcellular localization using deep learning
International Journal of Molecular Sciences-2023 DeepTP: A Deep Learning Model for Thermophilic Protein Prediction
ICLR-2020 DivideMix: Learning with Noisy Labels as Semi-supervised Learning
ICML-2021 E(n) Equivariant Graph Neural Networks
Bioinformatics-2024 Embedding-based alignment: combining protein language models with dynamic programming alignment to detect structural similarities in the twilight-zone
Human Genetics-2022 Embeddings from protein language models predict conservation and variant effects
ICLRw-2023 Enhancing Protein Language Models with Structure-based Encoder and Pre-training
NIPS-2019 Evaluating Protein Transfer Learning with TAPE
Evaluating the representational power of pre-trained DNA language models for regulatory genomics
NIPS-2021 FLIP: Benchmark tasks in fitness landscape inference for proteins
Nature Biotechnology-2023 Fast and accurate protein structure search with Foldseek
Fast protein structure searching using structure graph embeddings
Feature Reuse and Scaling: Understanding Transfer Learning with Protein Language Models
Nature-2021 Highly accurate protein structure prediction with AlphaFold
NIPS-2021 Language models enable zero-shot prediction of the effects of mutations on protein function
bioRxiv-2022 Language models of protein sequences at the scale of evolution enable accurate structure prediction
ICLR-2021 Learning from Protein Structure with Geometric Vector Perceptrons
Nature Machine Intelligence-2022 Learning functional properties of proteins with language models
ICML-2022 Learning inverse folding from millions of predicted structures
Nature Biotechnology-2022 Learning protein fitness models from evolutionary and assay-labeled data
Learning sequence, structure, and function representations of proteins with language models
Bioinformatics Advances-2021 Light attention predicts protein location from the language of life
ICML-2021 MSA Transformer
ACS Synthetic Biology-2019 Machine Learning Applied to Predicting Microorganism Growth Temperatures and Enzyme Catalytic Optima
bioRxiv-2022 Masked inverse folding with sequence transfer for protein representation learning
KDD-2021 Modeling Protein Using Large-scale Pretrain Language Model
Nature Biotechnology-2024 Multistate and functional protein design using RoseTTAFold sequence space diffusion
ICLR-2022 OntoProtein: Protein Pretraining With Gene Ontology Embedding
NIPS-2022 PEER: A Comprehensive and Multi-Task Benchmark for Protein Sequence Understanding
Nature Communications-2024 PLMSearch: Protein language model powers accurate and fast sequence search for remote homology
PoET: A generative model of protein families as sequences-of-sequences
TPAMI-2020 ProtTrans: Towards Cracking the Language of Life’s Code Through Self-Supervised Learning
Protein generation with evolutionary diffusion: sequence is all you need
Nature Biotechnology-2024 Protein remote homology detection and structural alignment using deep learning
NeurIPS-2023 ProteinGym: Large-Scale Benchmarks for Protein Design and Fitness Prediction
ICLR-2024 SaProt: Protein Language Modeling with Structure-aware Vocabulary
eLife-2024 Sensitive remote homology search by local alignment of small positional embeddings from protein language models
Simulating 500 million years of evolution with a language model
Nature Biotechnology-2022 Single-sequence protein structure prediction using a language model and deep learning
Nature Machine Intelligence-2023 Structure-inducing pre-training
Structure-informed Language Models Are Protein Designers
JCIM-2024 TM-search: An Efficient and Effective Tool for Protein Structure Database Search
bioRxiv-2021 Toward More General Embeddings for Protein Design: Harnessing Joint Representations of Sequence and Structure
ICML-2022 Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval
bioRxiv-2023 When Geometric Deep Learning Meets Pretrained Protein Language Models
Frontiers in Microbiology-2022 iThermo: A Sequence-Based Model for Identifying Thermophilic Proteins Using a Multi-Feature Fusion Strategy
NIPS-2017 Attention Is All You Need
NIPS-2022 Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
NAACL-2019 BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
AAAI-2018 Generative Adversarial Network for Abstractive Text Summarization
ICLR-2019 Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks
CVPR-2022 Grounded Language-Image Pre-training
arXiv-2022 Holistic Evaluation of Language Models
ACL-2018 Improving Abstraction in Text Summarization
OpenAI-2018 Improving Language Understanding by Generative Pre-Training
ACL-2019 Improving Robustness of Neural Machine Translation with Multi-task Learning
OpenAI-2020 Language Models are Few-Shot Learners
OpenAI-2019 Language Models are Unsupervised Multitask Learners
ICLR-2022 Language-driven Semantic Segmentation
ACL-2021 Learning Sequential and Structural Information for Source Code Summarization
arXiv-2022 Learning code summarization from a small and local dataset
ICLR-2022 LoRA: Low-Rank Adaptation of Large Language Models
NAACL-2022 MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation
ICML-2019 Parameter-Efficient Transfer Learning for NLP
EMNLP-2021 Rethinking Data Augmentation for Low-Resource Neural Machine Translation: A Multi-Task Learning Approach
arXiv-2019 RoBERTa: A Robustly Optimized BERT Pretraining Approach
AAAI-2017 SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
NIPS-2014 Sequence to Sequence Learning with Neural Networks
ICLR-2021 Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning
JMLR-2022 Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
arXiv-2022 Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
OpenAI-2022 Training language models to follow instructions with human feedback
NIPS-2019 Unified Language Model Pre-training for Natural Language Understanding and Generation
CVPR-2018 A Closer Look at Spatiotemporal Convolutions for Action Recognition
ICML-2020 A Simple Framework for Contrastive Learning of Visual Representations
ICLR-2023 AIM: Adapting Image Models for Efficient Video Action Recognition
arXiv-2021 ActionCLIP: A New Paradigm for Video Action Recognition
NIPS-2021 Align before Fuse: Vision and Language Representation Learning with Momentum Distillation
ICCV-2021 An Empirical Study of Training Self-Supervised Vision Transformers
ICLR-2021 An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
GCPR-2021 AudioCLIP: Extending CLIP to Image, Text and Audio
ICML-2022 BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
arXiv-2020 BYOL works even without batch statistics
CVPR-2015 Beyond Short Snippets: Deep Networks for Video Classification
NIPS-2020 Big Self-Supervised Models are Strong Semi-Supervised Learners
NIPS-2020 Bootstrap your own latent: A new approach to self-supervised Learning
arXiv-2021 CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval
SIGGRAPH-2022 CLIPasso: Semantically-Aware Object Sketching
DeepAI-2022 Can Language Understand Depth?
NIPS-2022 CoCa: Contrastive Captioners are Image-Text Foundation Models
ECCV-2022 Contrastive Deep Supervision
ECCV-2020 Contrastive Multiview Coding
CVPR-2016 Convolutional Two-Stream Network Fusion for Video Action Recognition
CVPR-2016 Deep Residual Learning for Image Recognition
ICCV-2021 Emerging Properties in Self-Supervised Vision Transformers
ECCV-2020 End-to-End Object Detection with Transformers
CVPR-2021 Exploring Simple Siamese Representation Learning
arXiv-2022 GLIPv2: Unifying Localization and Vision-Language Understanding
NIPS-2014 Generative Adversarial Nets
CVPR-2022 GroupViT: Semantic Segmentation Emerges from Text Supervision
OpenAI-2022 Hierarchical Text-Conditional Image Generation with CLIP Latents
arXiv-2021 How Much Can CLIP Benefit Vision-and-Language Tasks?
NIPS-2022 Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks
NIPS-2012 ImageNet classification with deep convolutional neural networks
arXiv-2020 Improved Baselines with Momentum Contrastive Learning
NIPS-2021 Intriguing Properties of Vision Transformers
ICML-2021 Is Space-Time Attention All You Need for Video Understanding?
CVPR-2014 Large-scale Video Classification with Convolutional Neural Networks
ICCV-2015 Learning Spatiotemporal Features with 3D Convolutional Networks
ICML-2021 Learning Transferable Visual Models From Natural Language Supervision
CVPR-2022 Masked Autoencoders Are Scalable Vision Learners
WACV-2023 MixGen: A New Multi-Modal Data Augmentation
CVPR-2020 Momentum Contrast for Unsupervised Visual Representation Learning
CVPR-2018 Non-local Neural Networks
ICLR-2022 Open-vocabulary Object Detection via Vision and Language Knowledge Distillation
ICLR-2022 Perceiver IO: A General Architecture for Structured Inputs & Outputs
ICML-2021 Perceiver: General Perception with Iterative Attention
CVPR-2022 PointCLIP: Point Cloud Understanding by CLIP
CVPR-2017 Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset
arXiv-2018 Representation Learning with Contrastive Predictive Coding
ICCV-2019 SlowFast Networks for Video Recognition
ICCV-2021 Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
arXiv-2022 T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
ECCV-2016 Temporal Segment Networks: Towards Good Practices for Deep Action Recognition
NIPS-2014 Two-Stream Convolutional Networks for Action Recognition in Videos
Blog-2020 Understanding Self-Supervised and Contrastive Learning with "Bootstrap Your Own Latent" (BYOL)
CVPR-2019 Unsupervised Embedding Learning via Invariant and Spreading Instance Feature
CVPR-2018 Unsupervised Feature Learning via Non-Parametric Instance-level Discrimination
NIPS-2020 Unsupervised Learning of Visual Features by Contrasting Cluster Assignments
NIPS-2021 VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts
ICML-2021 ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision
TNNLS-2019 A Comprehensive Survey on Graph Neural Networks
ICML-2020 Contrastive Multi-View Representation Learning on Graphs
NIPS-2020 Deep Graph Contrastive Representation Learning
KDD-2020 GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training
NIPS-2020 Graph Contrastive Learning with Augmentations
Neural Subgraph Matching
NIPS-2022 A Neural Corpus Indexer for Document Retrieval
ACM Computing Surveys-2022 Deep Meta-learning in Recommendation Systems: A Survey
SIGIR-2020 How to Retrain Recommender System? A Sequential Meta-Learning Method
KDD-2019 MeLU: Meta-Learned User Preference Estimator for Cold-Start Recommendation
NIPS-2022 Transformer Memory as a Differentiable Search Index
DLRS-2016 Wide & Deep Learning for Recommender Systems
NIPS-2019 GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism
GTC-2020 Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
Google-2022 Pathways: Asynchronous Distributed Dataflow for ML
OSDI-2014 Scaling Distributed Machine Learning with the Parameter Server
SC-2020 ZeRO: Memory Optimizations Toward Training Trillion Parameter Models
ICML-2016 A Convolutional Attention Network for Extreme Summarization of Source Code
ICPC-2021 A Multi-Modal Transformer-based Code Summarization Approach for Smart Contracts
ICSE-2019 A Neural Model for Generating Natural Language Summaries of Program Subroutines
ICSE-2019 A Novel Neural Source Code Representation Based on Abstract Syntax Tree
arXiv-2022 A Survey of Deep Learning Models for Structural Code Understanding
ACL-2020 A Transformer-based Approach for Source Code Summarization
IJCNLP-2017 A parallel corpus of Python functions and documentation strings for automated code documentation and code generation
ICSE-2022 AST-Trans: Code Summarization with Efficient Tree-Structured Attention
JSS-2022 Automatic source code summarization with graph attention networks
ASE-2020 Automating just-in-time comment updating
NAACL-2022 CODE-MVP: Learning to Represent Source Code from Multiple Views with Contrastive Pre-Training
ACL-2021 Code Summarization with Structure-induced Transformer
EMNLP-2020 CodeBERT: A Pre-Trained Model for Programming and Natural Languages
EMNLP-2021 CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
IJCAI-2019 Commit Message Generation for Source Code Changes
DeepMind-2022 Competition-Level Code Generation with AlphaCode
ACL-2020 Contrastive Code Representation Learning
ICSE-2022 Cross-Domain Deep Code Search with Few-Shot Meta Learning
ICPC-2018 Deep code comment generation
ICSE-2018 Deep code search
ICSME-2021 Ensemble Models for Neural Source Code Summarization of Subroutines
arXiv-2021 Evaluating Large Language Models Trained on Code
IJCAI-2021 Graph-Augmented Code Summarization in Computational Notebooks
ICML-2020 Graph-based, Self-Supervised Program Repair from Diagnostic Feedback
ICLR-2021 GraphCodeBERT: Pre-training Code Representations with Data Flow
ACL-2021 HAConvGNN: Hierarchical Attention Based Convolutional Graph Neural Network for Code Documentation Generation in Jupyter Notebooks
ICPC-2022 HELoC: Hierarchical Contrastive Learning of Source Code Representation
AAAI-2022 Hierarchical Heterogeneous Graph Attention Network for Syntax-Aware Summarization
ACL-2022 Impact of Evaluation Methodologies on Code Summarization
MSR-2020 Improved Automatic Summarization of Subroutines via Attention to File Context
ICPC-2020 Improved Code Summarization via a Graph Neural Network
ACL-2015 Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
ICPC-2021 Improving Code Summarization with Block-wise Abstract Syntax Tree Splitting
ICSE-2021 InferCode: Self-Supervised Learning of Code Representations by Predicting Subtrees
ICLR-2021 Language-Agnostic Representation Learning of Source Code from Structure and Context
ICLR-2018 Learning to Represent Programs with Graphs
ESEM-2022 MMF3: Neural Code Summarization Based on Multi-Modal Fine-Grained Feature Fusion
ACL-2022 Modeling Hierarchical Syntax Structure with Triplet Position for Source Code Summarization
ICSE-2022 On the Evaluation of Neural Code Summarization
ISSRE-2021 Peculiar: Smart Contract Vulnerability Detection Based on Crucial Data Flow Graph and Pre-training Techniques
ICPC-2021 Project-Level Encoding for Neural Source Code Summarization of Subroutines
ICLR-2021 Retrieval-Augmented Generation for Code Summarization via Hybrid GNN
SIGIR-2021 Self-Supervised Contrastive Learning for Code Retrieval and Summarization via Semantic-Preserving Transformations
ICPC-2022 Self-Supervised Learning of Smart Contract Representations
ICLR-2019 Structured Neural Summarization
ACL-2016 Summarizing Source Code using a Neural Attention Model
AAAI-2021 SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation
OpenAI-2022 Text and Code Embeddings by Contrastive Pre-Training
ACL-2020 Towards Context-Aware Code Comment Generation
ACL-2022 UniXcoder: Unified Cross-Modal Pre-training for Code Representation
NAACL-2021 Unified Pre-training for Program Understanding and Generation
NIPS-2020 Unsupervised Translation of Programming Languages
ICPC-2022 Zero-Shot Program Representation Learning
ICLR-2019 code2seq: Generating Sequences from Structured Representations of Code
POPL-2019 code2vec: Learning Distributed Representations of Code
stanford-cs329p (Total 27 articles)
1.1 Course Introduction [stanford-cs329p]
1.2 Data Acquisition [stanford-cs329p]
1.3 Web Scraping [stanford-cs329p]
1.4 Data Labeling [stanford-cs329p]
2.1 Exploratory Data Analysis [stanford-cs329p]
2.2 Data Cleaning [stanford-cs329p]
2.3 Data Transformation [stanford-cs329p]
2.4 Feature Engineering [stanford-cs329p]
2.5 A Data Scientist's Daily Work [stanford-cs329p]
3.1 An 8-Minute Introduction to Machine Learning [stanford-cs329p]
3.2 Decision Trees: the Simplest and Most Commonly Used Model [stanford-cs329p]
3.3 Linear Models: Just as Simple and Commonly Used [stanford-cs329p]
3.4 Stochastic Gradient Descent [stanford-cs329p]
3.5 Multilayer Perceptron [stanford-cs329p]
3.6 Convolutional Neural Networks [stanford-cs329p]
3.7 Recurrent Neural Networks [stanford-cs329p]
4.1 Model Evaluation [stanford-cs329p]
4.2 Overfitting and Underfitting [stanford-cs329p]
4.3 Model Validation [stanford-cs329p]
5.1 Bias and Variance [stanford-cs329p]
5.2 Bagging [stanford-cs329p]
5.3 Boosting [stanford-cs329p]
5.4 Stacking [stanford-cs329p]
9.1 Model Tuning [stanford-cs329p]
9.2 Hyperparameter Optimization [stanford-cs329p]
9.3 Neural Architecture Search [stanford-cs329p]
10.1 Deep Neural Network Architectures [stanford-cs329p]