
Shubham Pandey

@csespandey

Just wrapped up a great week at ICIP 2025 in Anchorage, Alaska, presenting our paper on matching cross-domain fingerprint images using contrastive training. It was great getting inspired by brilliant minds. #ICIP2025

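A minimal sketch of what a contrastive objective for cross-domain fingerprint matching can look like, assuming an InfoNCE-style formulation: the paper's actual loss, architecture, and hyperparameters may differ, and the function name, temperature, and pairing convention below are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def contrastive_matching_loss(emb_a, emb_b, temperature=0.1):
    """Illustrative InfoNCE-style loss (not the paper's code).

    emb_a, emb_b: (N, D) embeddings from two capture domains
    (e.g., contact vs. contactless); row i of each tensor is
    assumed to be the same finger identity.
    """
    emb_a = F.normalize(emb_a, dim=1)
    emb_b = F.normalize(emb_b, dim=1)

    # Cosine-similarity logits between every cross-domain pair.
    logits = emb_a @ emb_b.t() / temperature  # (N, N)

    # Matching pairs sit on the diagonal; all other pairs are negatives.
    targets = torch.arange(emb_a.size(0), device=emb_a.device)

    # Symmetric cross-entropy: pull matching pairs together in both directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

The symmetric form treats each domain as the query in turn, so neither capture modality is privileged during training.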

Shubham Pandey reposted

✨ Today we're launching AI coding features to our Pro+ subscribers located in the United States, including natural language to code generation, code completion, and an integrated chatbot. We'd love to hear from you with positive examples of AI coding in Colab: please share! 1/


Shubham Pandey reposted

Top ML Papers of the Week (June 5-11):
- AlphaDev
- MusicGen
- Fine-Grained RLHF
- Humor in ChatGPT
- Concept Scrubbing in LLM
- Augmenting LLMs with Databases
...


Shubham Pandey reposted

[Image-only post from @elonmusk]

Shubham Pandey reposted

A neural net is made of simple building blocks. Learning how the output gradient is backpropagated through these basic components helps us understand how each part contributes to the final model performance. Below we see how the complementary node & sum modules behave.

[Tweet image from @alfcnz illustrating the node & sum modules]
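A minimal sketch of the idea, not the thread's own code: in backpropagation a sum module copies the incoming gradient to each of its inputs, while a fan-out "node" sums the gradients arriving from its branches, which is why the two modules are complementary. The class names and toy example below are my own illustration.

```python
import numpy as np

class Sum:
    """Forward: y = x1 + x2.  Backward: the output gradient is copied to each input."""
    def forward(self, x1, x2):
        return x1 + x2

    def backward(self, grad_out):
        # dy/dx1 = dy/dx2 = 1, so the gradient passes through unchanged.
        return grad_out, grad_out

class FanOut:
    """Forward: the input is copied to two branches.  Backward: branch gradients are summed."""
    def forward(self, x):
        return x, x

    def backward(self, grad1, grad2):
        # Both branches depend on the same x, so their gradients add up.
        return grad1 + grad2

# Toy check: x is fanned out and the two copies are summed, so dy/dx = 2.
x = np.array(3.0)
fan, add = FanOut(), Sum()
x1, x2 = fan.forward(x)
y = add.forward(x1, x2)

g1, g2 = add.backward(np.array(1.0))  # dL/dy = 1
dx = fan.backward(g1, g2)
print(y, dx)  # 6.0, 2.0
```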

Shubham Pandey reposted

One year from now, what size LLM (large language model) do you think will be used for the most inferences?

