
TensorFlow Developer Certificate

Updated May 1, 2026 · 12 min read · Written by Certsqill experts
Quick facts — TF-Dev

Exam cost: $100
Format: Performance-based (5 coding tasks)
Time limit: 300 minutes (5 hours)
Scoring: Pass/fail; each of the 5 tasks is graded by an automated test suite
Valid for: 3 years
Testing: Remote proctored in the JetBrains PyCharm IDE

Who this exam is for

The TensorFlow Developer Certificate is designed for professionals who build, train, and deploy models with TensorFlow in a professional capacity. It is taken by machine learning engineers, data scientists, software developers, and technical professionals looking to validate their expertise.

You do not need extensive prior experience to attempt it, but you will benefit from hands-on familiarity with TensorFlow and Keras. The exam tests whether you can take a dataset to a trained, saved model that meets an accuracy threshold, not whether you have memorized API signatures. If you can build and debug training runs under time pressure, structured practice will handle the rest.

Domain breakdown

The TF-Dev exam is built around official domains, each with a fixed weight in the overall assessment. This distribution should directly inform how you allocate your study time.

TensorFlow Fundamentals (10%)
TensorFlow 2.x eager execution, tensor creation and operations (tf.constant, tf.Variable, tf.zeros, tf.ones), the tf.data pipeline (from_tensor_slices, map, batch, shuffle, prefetch), tf.function for graph execution, saving models (SavedModel format, HDF5 .h5).

Building & Training Neural Networks with Keras (30%)
Keras Sequential API (model.add), Functional API (tf.keras.Input, model = tf.keras.Model), custom Layer subclasses, optimizers (Adam, SGD with momentum, RMSprop), loss functions (categorical_crossentropy, sparse_categorical_crossentropy, binary_crossentropy, MSE), callbacks (ModelCheckpoint, EarlyStopping, LearningRateScheduler, ReduceLROnPlateau).

Image Classification with CNNs (20%)
Conv2D (filters, kernel_size, padding, activation), MaxPooling2D, GlobalAveragePooling2D, Flatten, Dense, BatchNormalization, Dropout, ImageDataGenerator with augmentation, flow_from_directory, transfer learning (MobileNetV2/ResNet50/EfficientNetV2 with include_top=False), fine-tuning by unfreezing layers.

NLP & Sequence Modeling (20%)
Tokenizer (fit_on_texts, texts_to_sequences, word_index), pad_sequences (maxlen, padding, truncating), the TextVectorization layer, the Embedding layer, SimpleRNN, LSTM (return_sequences for stacking), GRU, the Bidirectional wrapper, 1D CNNs for text (Conv1D, GlobalMaxPooling1D).

Time Series & Generative Models (20%)
Windowed dataset creation from a univariate time series (tf.data sliding window or NumPy), LSTM/GRU for forecasting (stateful vs. stateless), Lambda layers for preprocessing (tf.expand_dims, scaling), MAE and MSE evaluation, and basic sequence generation with temperature sampling.
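The fundamentals domain is small, but its building blocks appear inside every task. Below is a minimal sketch, using made-up random data, of a tf.data input pipeline built from the calls named above, plus both save formats; the shapes, the preprocessing step, and the file paths are illustrative, not exam requirements.

    import numpy as np
    import tensorflow as tf

    # Made-up data: 1,000 samples with 10 features each.
    features = np.random.rand(1000, 10).astype("float32")
    labels = np.random.randint(0, 2, size=(1000,))

    # tf.data pipeline: slice -> map -> shuffle -> batch -> prefetch.
    dataset = (
        tf.data.Dataset.from_tensor_slices((features, labels))
        .map(lambda x, y: (x * 2.0, y))      # illustrative element-wise preprocessing
        .shuffle(buffer_size=1000)
        .batch(32)
        .prefetch(tf.data.AUTOTUNE)
    )

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(dataset, epochs=1)

    # Both save formats named in the blueprint (paths are illustrative).
    model.save("my_model")      # SavedModel directory (TF 2.x default)
    model.save("my_model.h5")   # single HDF5 file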

Note the domain with the highest weight: Building & Training Neural Networks with Keras, at 30%. Many candidates under-invest here because the material feels routine, but every one of the five tasks draws on it, and the automated grader is exacting about specifics like loss selection, callback configuration, and the saved-model format.

What the exam actually tests

This is not a memorization exam, and it is not multiple choice. Each of the five tasks presents a dataset and explicit requirements (architecture constraints, an accuracy threshold, a required output file) and asks you to deliver a trained model that passes the automated test suite.

Here are examples of the task types you will encounter:

CNN Image Classification Task
Given a labeled image dataset of 5 categories (2,000 images per class), build and train a CNN using TF/Keras that achieves greater than 85% validation accuracy. Apply data augmentation and save the best model checkpoint.
Approach: Conv2D(32, (3,3), activation="relu") > MaxPooling2D(2,2) > Conv2D(64, (3,3), activation="relu") > MaxPooling2D(2,2) > Flatten > Dense(512, relu) > Dense(5, softmax). Use ImageDataGenerator for augmentation and a ModelCheckpoint(save_best_only=True) callback.
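A hedged sketch of that approach, assuming the images live in class subdirectories under a train/ folder; the path, image size, and epoch count are illustrative:

    import tensorflow as tf
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Assumed layout: train/<class_name>/*.jpg for the 5 classes.
    datagen = ImageDataGenerator(
        rescale=1.0 / 255,
        rotation_range=40,
        zoom_range=0.2,
        horizontal_flip=True,
        validation_split=0.2,
    )
    train_gen = datagen.flow_from_directory(
        "train/", target_size=(150, 150), batch_size=32,
        class_mode="categorical", subset="training")
    val_gen = datagen.flow_from_directory(
        "train/", target_size=(150, 150), batch_size=32,
        class_mode="categorical", subset="validation")

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu",
                               input_shape=(150, 150, 3)),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # Keep only the best epoch by validation accuracy.
    checkpoint = tf.keras.callbacks.ModelCheckpoint(
        "best_model.h5", monitor="val_accuracy", save_best_only=True)
    model.fit(train_gen, validation_data=val_gen, epochs=20,
              callbacks=[checkpoint])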
Transfer Learning Fine-Tuning Task
Using MobileNetV2 pretrained on ImageNet, build a binary classifier for cats vs dogs. First train with the base frozen, then unfreeze the last 30 layers and fine-tune with a learning rate of 1e-5 for 5 more epochs.
Approach: base_model = MobileNetV2(include_top=False, input_shape=(224,224,3)); base_model.trainable = False; add GlobalAveragePooling2D + Dense(1, sigmoid). After initial training: base_model.trainable = True; freeze all layers except the last 30; compile with lr=1e-5; fit again.
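A sketch of the two-phase recipe; the training calls are commented out because train_ds and val_ds are stand-ins for whatever dataset the task supplies:

    import tensorflow as tf

    # Phase 1: frozen ImageNet base with a new binary head.
    base_model = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    base_model.trainable = False

    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = base_model(inputs, training=False)   # keep BatchNorm in inference mode
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)   # initial training

    # Phase 2: unfreeze only the last 30 layers, recompile at a low learning rate.
    base_model.trainable = True
    for layer in base_model.layers[:-30]:
        layer.trainable = False

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)   # fine-tuning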
LSTM Time Series Forecasting Task
Given a univariate time series of daily temperatures, build an LSTM model that predicts the next 1 day from a window of 30 days. Normalize the data before training and denormalize predictions for evaluation. Report MAE on the test set.
Approach: normalize with (x - mean) / std. Build a windowed dataset: X = series[i:i+30], y = series[i+30]. Model: LSTM(64, return_sequences=True) > LSTM(32) > Dense(1). Predict, then denormalize: y_pred * std + mean. Compute MAE = tf.keras.losses.MAE(y_true, y_pred).
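A minimal end-to-end sketch; the synthetic sine-wave series stands in for the real temperature data:

    import numpy as np
    import tensorflow as tf

    # Synthetic daily series standing in for the temperature data.
    series = np.sin(np.arange(1000) * 0.05) + np.random.normal(0, 0.1, 1000)
    mean, std = series.mean(), series.std()
    normed = (series - mean) / std

    window = 30
    X = np.array([normed[i:i + window] for i in range(len(normed) - window)])
    y = normed[window:]
    X = X[..., np.newaxis]              # shape (samples, 30, 1) for the LSTM

    split = int(len(X) * 0.8)
    X_train, X_test = X[:split], X[split:]
    y_train, y_test = y[:split], y[split:]

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(window, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    model.fit(X_train, y_train, validation_split=0.1, epochs=10)

    # Denormalize before reporting MAE in the original units.
    y_pred = model.predict(X_test).squeeze() * std + mean
    mae = np.mean(np.abs((y_test * std + mean) - y_pred))
    print(f"Test MAE: {mae:.3f}")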

How to prepare — 4-week study plan

This plan assumes one hour per weekday and roughly 30 minutes of lighter review on weekends. It is calibrated for someone with some relevant experience. If you are starting from zero, add an extra week before Week 1 to familiarize yourself with the basics.

Week 1: TF Fundamentals & Dense Neural Networks
  • Set up PyCharm with a Python virtual environment and TensorFlow 2.x, and practice only in PyCharm, not Jupyter, since the exam is taken in the PyCharm IDE
  • Master Keras Sequential API: build a multi-layer Dense network, compile with Adam optimizer and categorical crossentropy, fit with validation_data, evaluate on test set
  • Learn Keras Functional API: inputs = tf.keras.Input(shape=(28,28)), x = Flatten()(inputs), x = Dense(128, activation="relu")(x), outputs = Dense(10, activation="softmax")(x), model = tf.keras.Model(inputs, outputs)
  • Practice all callbacks: ModelCheckpoint(filepath, monitor="val_accuracy", save_best_only=True), EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True), LearningRateScheduler(lambda epoch, lr: lr * 0.9 if epoch > 10 else lr); all three are wired together in the sketch after this list
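A sketch wiring the three Week 1 callbacks into a simple Dense model, with MNIST as a stand-in dataset and an illustrative checkpoint filename:

    import tensorflow as tf

    # MNIST as a stand-in; its test split doubles as validation data here.
    (x_train, y_train), (x_val, y_val) = tf.keras.datasets.mnist.load_data()
    x_train, x_val = x_train / 255.0, x_val / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])

    callbacks = [
        # Keep only the epoch with the best validation accuracy.
        tf.keras.callbacks.ModelCheckpoint("best.h5", monitor="val_accuracy",
                                           save_best_only=True),
        # Stop when val_loss stalls and roll back to the best weights.
        tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                         restore_best_weights=True),
        # Decay the learning rate by 10% per epoch after epoch 10.
        tf.keras.callbacks.LearningRateScheduler(
            lambda epoch, lr: lr * 0.9 if epoch > 10 else lr),
    ]
    model.fit(x_train, y_train, validation_data=(x_val, y_val),
              epochs=30, callbacks=callbacks)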
Week 2: CNNs & Image Classification
  • Build CNNs from scratch: implement Conv2D > MaxPooling2D > Conv2D > MaxPooling2D > GlobalAveragePooling2D > Dense pattern, understand receptive field and feature map dimensions
  • Master ImageDataGenerator: rescale=1./255, rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, validation_split=0.2
  • Implement transfer learning: load base model with include_top=False and weights="imagenet", add GlobalAveragePooling2D and Dense classification head, set base_model.trainable=False, compile and train
  • Practice fine-tuning: set base_model.trainable=True, iterate through base_model.layers and set trainable=False for all except last N layers, recompile with very low learning rate (1e-5), train for 10 more epochs
Week 3: NLP & Sequence Models
  • Master the Tokenizer workflow: tokenizer = Tokenizer(num_words=10000, oov_token="<OOV>"); tokenizer.fit_on_texts(train_texts); sequences = tokenizer.texts_to_sequences(texts); padded = pad_sequences(sequences, maxlen=120, padding="post", truncating="post") (the full pipeline appears in the sketch after this list)
  • Build RNN architectures for text: Embedding(vocab_size, 64, input_length=maxlen) > LSTM(64) > Dense(1, sigmoid) for binary; for stacking: LSTM(64, return_sequences=True) > LSTM(32) > Dense
  • Study Bidirectional LSTM: model.add(Bidirectional(LSTM(64))) wraps LSTM to process sequence forward and backward, doubling the output units; useful for text classification where context from both directions matters
  • Implement 1D CNN for text: Embedding > Conv1D(128, 5, activation="relu") > GlobalMaxPooling1D() > Dense(64, relu) > Dense(1, sigmoid) — faster than LSTM, often comparable accuracy on classification
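A compact sketch of the Week 3 pipeline, from raw text to a trained Bidirectional LSTM; the four-sentence corpus is a stand-in for a real labeled dataset:

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    # Stand-in corpus; the exam supplies a real labeled text dataset.
    train_texts = ["great movie", "terrible plot",
                   "loved every minute", "waste of time"]
    train_labels = np.array([1, 0, 1, 0])

    tokenizer = Tokenizer(num_words=10000, oov_token="<OOV>")
    tokenizer.fit_on_texts(train_texts)
    sequences = tokenizer.texts_to_sequences(train_texts)
    padded = pad_sequences(sequences, maxlen=120, padding="post",
                           truncating="post")

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(10000, 64, input_length=120),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # 128 outputs
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(padded, train_labels, epochs=5)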
Week 4: Time Series & Full Exam Simulation
  • Build a windowed time series dataset with tf.data: window the series into chunks of window_size + 1 values, flat_map each window into a single batch, shuffle, map into (features, label) pairs, then batch and prefetch (the full windowed_dataset function appears after this list)
  • Implement LSTM forecasting: Lambda(lambda x: tf.expand_dims(x, axis=-1)) > LSTM(64, return_sequences=True) > LSTM(32) > Dense(1) > Lambda(lambda x: x * 100.0) for scaling — practice the Lambda scaling pattern
  • Run two complete 5-hour timed mock exams in PyCharm: implement all 5 task types (dense NN, CNN, transfer learning, NLP, time series), save files in expected output locations, verify accuracy thresholds are met
  • Time each task: target 45-50 min per task, leaving 10-15 min buffer. Identify your slowest task type and practice it until you can complete it reliably in under 45 minutes including training time
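For reference, here is the Week 4 windowed_dataset helper laid out readably, with an illustrative stand-in series:

    import numpy as np
    import tensorflow as tf

    def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
        # Each window holds window_size inputs plus one target value.
        dataset = tf.data.Dataset.from_tensor_slices(series)
        dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
        dataset = dataset.flat_map(lambda w: w.batch(window_size + 1))
        dataset = dataset.shuffle(shuffle_buffer)
        # Split each window into (first window_size values, last value).
        dataset = dataset.map(lambda w: (w[:-1], w[-1]))
        return dataset.batch(batch_size).prefetch(1)

    series = np.arange(200, dtype="float32")    # stand-in series
    train_ds = windowed_dataset(series, window_size=30, batch_size=32,
                                shuffle_buffer=1000)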

Common mistakes candidates make

These patterns appear repeatedly among candidates who resit this exam. Knowing them in advance can save you a retake.

Not practicing in PyCharm IDE before the exam
The exam is taken in PyCharm with internet access restricted to TensorFlow documentation only. Candidates who practice exclusively in Jupyter notebooks struggle with PyCharm's project structure, script execution (Shift+F10), virtual environment activation, and debugging workflow. Spend at least 2 weeks practicing full tasks in PyCharm before the exam.
Not knowing the Keras Functional API
The Sequential API cannot express models with multiple inputs, multiple outputs, or skip connections (residual connections). Exam tasks may require architectures that the Sequential API cannot represent. Know the Functional API: Input() > layer()(prev_output) > layer()(prev_output) > Model(inputs, outputs). Practice building the same model both ways, as in the sketch below.
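The same classifier built both ways, plus a skip-connection variant that only the Functional API can express:

    import tensorflow as tf

    # Same classifier, Sequential style...
    sequential_model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # ...and Functional style.
    inputs = tf.keras.Input(shape=(28, 28))
    x = tf.keras.layers.Flatten()(inputs)
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
    functional_model = tf.keras.Model(inputs, outputs)

    # A skip connection, which Sequential cannot express at all.
    x1 = tf.keras.layers.Dense(128, activation="relu")(
        tf.keras.layers.Flatten()(inputs))
    x2 = tf.keras.layers.Dense(128, activation="relu")(x1)
    merged = tf.keras.layers.Add()([x1, x2])    # residual add
    skip_model = tf.keras.Model(
        inputs, tf.keras.layers.Dense(10, activation="softmax")(merged))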
Missing ModelCheckpoint and EarlyStopping callbacks
Almost every exam task requires saving the best model based on a validation metric, not the final model. ModelCheckpoint(save_best_only=True, monitor="val_accuracy") is effectively required in every task, and EarlyStopping(restore_best_weights=True) prevents overfitting on small datasets. Skipping these callbacks can fail a task even when the accuracy threshold is otherwise met.
Underestimating training time per task
With 5 tasks in 300 minutes, you have approximately 60 minutes per task. Transfer learning tasks (load base model, train head, fine-tune) and LSTM time series tasks can take 40-50 minutes including actual training time on the exam machine. Build lightweight architectures first (fewer filters, smaller LSTM units), verify they train correctly, then increase capacity only if accuracy threshold is not met.

Is Certsqill right for you?

Honestly: Certsqill is built for candidates who have already done some studying and want to convert knowledge into exam performance. If you have never touched the subject, start with a foundational course first — then come to Certsqill when you are ready to practice.

Where Certsqill is strong: question depth, AI-powered explanations, and domain analytics. Every question is mapped to the exam blueprint. When you get something wrong, the AI tutor explains why the right answer is right and why each wrong answer fails under the specific constraints in the question.

Where Certsqill is not a replacement: video courses and hands-on labs. Use Certsqill to test and sharpen — not as your first exposure to a topic you have never encountered.

Ready to start practicing?
560 TF-Dev questions. AI tutor. 4 mock exams. 7-day free trial.